Dec 13 15:10:08.921281 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 15:10:08.921323 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 15:10:08.921342 kernel: BIOS-provided physical RAM map:
Dec 13 15:10:08.921353 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 15:10:08.921362 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 15:10:08.921371 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 15:10:08.921382 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 15:10:08.921392 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 15:10:08.921401 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 15:10:08.921411 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 15:10:08.921424 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 15:10:08.921434 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 15:10:08.921444 kernel: NX (Execute Disable) protection: active
Dec 13 15:10:08.921453 kernel: SMBIOS 2.8 present.
Dec 13 15:10:08.921465 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Dec 13 15:10:08.921475 kernel: Hypervisor detected: KVM
Dec 13 15:10:08.921489 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 15:10:08.921500 kernel: kvm-clock: cpu 0, msr 5919a001, primary cpu clock
Dec 13 15:10:08.921510 kernel: kvm-clock: using sched offset of 4778498038 cycles
Dec 13 15:10:08.921521 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 15:10:08.921532 kernel: tsc: Detected 2799.998 MHz processor
Dec 13 15:10:08.921542 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 15:10:08.921553 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 15:10:08.921563 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 15:10:08.921573 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 15:10:08.921587 kernel: Using GB pages for direct mapping
Dec 13 15:10:08.921598 kernel: ACPI: Early table checksum verification disabled
Dec 13 15:10:08.921608 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Dec 13 15:10:08.921618 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:10:08.921628 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:10:08.921638 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:10:08.921648 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 15:10:08.921659 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:10:08.921669 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:10:08.921683 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:10:08.921693 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:10:08.921704 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 15:10:08.921714 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 15:10:08.921724 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 15:10:08.921734 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 15:10:08.921750 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 15:10:08.921765 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 15:10:08.921775 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 15:10:08.921786 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 15:10:08.921797 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 15:10:08.921808 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 15:10:08.921819 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 15:10:08.921830 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 15:10:08.921844 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 15:10:08.921855 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 15:10:08.921866 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 15:10:08.921877 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 15:10:08.921887 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 15:10:08.921898 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 15:10:08.921909 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 15:10:08.921919 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 15:10:08.921930 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 15:10:08.921941 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 15:10:08.921955 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 15:10:08.921966 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 15:10:08.921977 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 15:10:08.921988 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 15:10:08.921999 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 15:10:08.922022 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 15:10:08.922033 kernel: Zone ranges:
Dec 13 15:10:08.922044 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 15:10:08.922054 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 15:10:08.922080 kernel: Normal empty
Dec 13 15:10:08.922092 kernel: Movable zone start for each node
Dec 13 15:10:08.922102 kernel: Early memory node ranges
Dec 13 15:10:08.922113 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 15:10:08.922123 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 15:10:08.922134 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 15:10:08.922145 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 15:10:08.922155 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 15:10:08.922166 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 15:10:08.922180 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 15:10:08.922191 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 15:10:08.922202 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 15:10:08.922212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 15:10:08.922223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 15:10:08.922246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 15:10:08.922257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 15:10:08.922268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 15:10:08.922290 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 15:10:08.922305 kernel: TSC deadline timer available
Dec 13 15:10:08.922317 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 15:10:08.922327 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 15:10:08.922338 kernel: Booting paravirtualized kernel on KVM
Dec 13 15:10:08.922349 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 15:10:08.922360 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 15:10:08.922371 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 15:10:08.922382 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 15:10:08.922393 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 15:10:08.922408 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Dec 13 15:10:08.922419 kernel: kvm-guest: PV spinlocks enabled
Dec 13 15:10:08.922430 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 15:10:08.922441 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 15:10:08.922452 kernel: Policy zone: DMA32
Dec 13 15:10:08.922464 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 15:10:08.922475 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 15:10:08.922486 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 15:10:08.922501 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 15:10:08.922512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 15:10:08.922523 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 192524K reserved, 0K cma-reserved)
Dec 13 15:10:08.922534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 15:10:08.922545 kernel: Kernel/User page tables isolation: enabled
Dec 13 15:10:08.922556 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 15:10:08.922567 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 15:10:08.922578 kernel: rcu: Hierarchical RCU implementation.
Dec 13 15:10:08.922589 kernel: rcu: RCU event tracing is enabled.
Dec 13 15:10:08.922604 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 15:10:08.922615 kernel: Rude variant of Tasks RCU enabled.
Dec 13 15:10:08.922626 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 15:10:08.922637 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 15:10:08.922648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 15:10:08.922659 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 15:10:08.922670 kernel: random: crng init done
Dec 13 15:10:08.922693 kernel: Console: colour VGA+ 80x25
Dec 13 15:10:08.922705 kernel: printk: console [tty0] enabled
Dec 13 15:10:08.922716 kernel: printk: console [ttyS0] enabled
Dec 13 15:10:08.922727 kernel: ACPI: Core revision 20210730
Dec 13 15:10:08.922739 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 15:10:08.922754 kernel: x2apic enabled
Dec 13 15:10:08.922766 kernel: Switched APIC routing to physical x2apic.
Dec 13 15:10:08.922777 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Dec 13 15:10:08.922789 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 13 15:10:08.922800 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 15:10:08.922816 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 15:10:08.922828 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 15:10:08.922839 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 15:10:08.922850 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 15:10:08.922862 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 15:10:08.922873 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 15:10:08.922885 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 15:10:08.922896 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 15:10:08.922907 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 15:10:08.922919 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 15:10:08.922930 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 15:10:08.922945 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 15:10:08.922957 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 15:10:08.922969 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 15:10:08.922980 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 15:10:08.922992 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 15:10:08.923003 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 15:10:08.923014 kernel: Freeing SMP alternatives memory: 32K
Dec 13 15:10:08.923026 kernel: pid_max: default: 32768 minimum: 301
Dec 13 15:10:08.923037 kernel: LSM: Security Framework initializing
Dec 13 15:10:08.923048 kernel: SELinux: Initializing.
Dec 13 15:10:08.924681 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 15:10:08.924703 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 15:10:08.924715 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 15:10:08.924726 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 15:10:08.924738 kernel: signal: max sigframe size: 1776
Dec 13 15:10:08.924749 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 15:10:08.924760 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 15:10:08.924771 kernel: smp: Bringing up secondary CPUs ...
Dec 13 15:10:08.924782 kernel: x86: Booting SMP configuration:
Dec 13 15:10:08.924793 kernel: .... node #0, CPUs: #1
Dec 13 15:10:08.924808 kernel: kvm-clock: cpu 1, msr 5919a041, secondary cpu clock
Dec 13 15:10:08.924819 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 15:10:08.924830 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Dec 13 15:10:08.924841 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 15:10:08.924853 kernel: smpboot: Max logical packages: 16
Dec 13 15:10:08.924876 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Dec 13 15:10:08.924888 kernel: devtmpfs: initialized
Dec 13 15:10:08.924899 kernel: x86/mm: Memory block size: 128MB
Dec 13 15:10:08.924911 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 15:10:08.924922 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 15:10:08.924938 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 15:10:08.924949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 15:10:08.924961 kernel: audit: initializing netlink subsys (disabled)
Dec 13 15:10:08.924973 kernel: audit: type=2000 audit(1734102608.097:1): state=initialized audit_enabled=0 res=1
Dec 13 15:10:08.924984 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 15:10:08.924996 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 15:10:08.925007 kernel: cpuidle: using governor menu
Dec 13 15:10:08.925018 kernel: ACPI: bus type PCI registered
Dec 13 15:10:08.925042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 15:10:08.925057 kernel: dca service started, version 1.12.1
Dec 13 15:10:08.925068 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 15:10:08.925091 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 15:10:08.925103 kernel: PCI: Using configuration type 1 for base access
Dec 13 15:10:08.925115 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 15:10:08.925138 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 15:10:08.925149 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 15:10:08.925160 kernel: ACPI: Added _OSI(Module Device)
Dec 13 15:10:08.925170 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 15:10:08.925198 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 15:10:08.925209 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 15:10:08.925221 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 15:10:08.925232 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 15:10:08.925243 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 15:10:08.925267 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 15:10:08.925290 kernel: ACPI: Interpreter enabled
Dec 13 15:10:08.925302 kernel: ACPI: PM: (supports S0 S5)
Dec 13 15:10:08.925313 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 15:10:08.925329 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 15:10:08.925341 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 15:10:08.925353 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 15:10:08.925614 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 15:10:08.925779 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 15:10:08.925938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 15:10:08.925955 kernel: PCI host bridge to bus 0000:00
Dec 13 15:10:08.926145 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 15:10:08.926317 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 15:10:08.926453 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 15:10:08.926587 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 15:10:08.926718 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 15:10:08.926850 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 15:10:08.926985 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 15:10:08.927181 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 15:10:08.927363 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 15:10:08.927513 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 15:10:08.927684 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 15:10:08.927828 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 15:10:08.927975 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 15:10:08.928189 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.928364 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 15:10:08.928551 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.928702 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 15:10:08.928897 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.929046 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 15:10:08.937298 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.937470 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 15:10:08.937688 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.937839 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 15:10:08.938019 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.938198 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 15:10:08.938400 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.938572 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 15:10:08.938765 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 15:10:08.938917 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 15:10:08.939111 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 15:10:08.939303 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Dec 13 15:10:08.939470 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 15:10:08.939657 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 15:10:08.939837 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 15:10:08.940019 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 15:10:08.940192 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Dec 13 15:10:08.940388 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 15:10:08.940554 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 15:10:08.940778 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 15:10:08.940954 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 15:10:08.941176 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 15:10:08.941354 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Dec 13 15:10:08.941502 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 15:10:08.941722 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 15:10:08.941869 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 15:10:08.942046 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 15:10:08.942232 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 15:10:08.942393 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 15:10:08.942539 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Dec 13 15:10:08.942696 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 15:10:08.942853 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:10:08.943037 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 15:10:08.943231 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 15:10:08.943405 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 15:10:08.943561 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 15:10:08.943738 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Dec 13 15:10:08.943892 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 15:10:08.944044 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:10:08.953291 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 15:10:08.953467 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 15:10:08.953624 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 15:10:08.953776 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 15:10:08.953924 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:10:08.954127 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 15:10:08.954299 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 15:10:08.954459 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 15:10:08.954605 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 15:10:08.954751 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:10:08.954900 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 15:10:08.955046 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 15:10:08.955207 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:10:08.955369 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 15:10:08.955515 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 15:10:08.955666 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:10:08.955813 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 15:10:08.955957 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 15:10:08.956116 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:10:08.956265 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 15:10:08.956424 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 15:10:08.956570 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 15:10:08.956719 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 15:10:08.956872 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 15:10:08.957018 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 15:10:08.957036 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 15:10:08.957049 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 15:10:08.957079 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 15:10:08.957091 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 15:10:08.957103 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 15:10:08.957115 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 15:10:08.957127 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 15:10:08.957144 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 15:10:08.957156 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 15:10:08.957168 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 15:10:08.957180 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 15:10:08.957191 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 15:10:08.957203 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 15:10:08.957215 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 15:10:08.957227 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 15:10:08.957238 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 15:10:08.957254 kernel: iommu: Default domain type: Translated
Dec 13 15:10:08.957266 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 15:10:08.957437 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 15:10:08.957586 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 15:10:08.957740 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 15:10:08.957757 kernel: vgaarb: loaded
Dec 13 15:10:08.957768 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 15:10:08.957780 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 15:10:08.957797 kernel: PTP clock support registered
Dec 13 15:10:08.957809 kernel: PCI: Using ACPI for IRQ routing
Dec 13 15:10:08.957821 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 15:10:08.957832 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 15:10:08.957843 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 15:10:08.957854 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 15:10:08.957866 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 15:10:08.957877 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 15:10:08.957888 kernel: pnp: PnP ACPI init
Dec 13 15:10:08.958109 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 15:10:08.958129 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 15:10:08.958141 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 15:10:08.958153 kernel: NET: Registered PF_INET protocol family
Dec 13 15:10:08.958165 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 15:10:08.958177 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 15:10:08.958202 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 15:10:08.958213 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 15:10:08.958230 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 15:10:08.958242 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 15:10:08.958266 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 15:10:08.958288 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 15:10:08.958300 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 15:10:08.958312 kernel: NET: Registered PF_XDP protocol family
Dec 13 15:10:08.958458 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 15:10:08.958616 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 15:10:08.958763 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 15:10:08.958921 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 15:10:08.959066 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 15:10:08.959232 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 15:10:08.959394 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 15:10:08.959544 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 15:10:08.959701 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 15:10:08.959874 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 15:10:08.960022 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 15:10:08.969591 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 15:10:08.969758 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 15:10:08.969909 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 15:10:08.970086 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 15:10:08.970242 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Dec 13 15:10:08.970407 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 15:10:08.970569 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:10:08.970726 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 15:10:08.970908 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Dec 13 15:10:08.971067 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 15:10:08.971219 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:10:08.971382 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 15:10:08.971531 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Dec 13 15:10:08.971700 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 15:10:08.971856 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:10:08.972018 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 15:10:08.972198 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Dec 13 15:10:08.972358 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 15:10:08.972519 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:10:08.972690 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 15:10:08.972840 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Dec 13 15:10:08.973000 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 15:10:08.973188 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:10:08.973359 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 15:10:08.973520 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Dec 13 15:10:08.973703 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 15:10:08.973861 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:10:08.974021 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 15:10:08.974187 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Dec 13 15:10:08.974364 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 15:10:08.974525 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:10:08.974684 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 15:10:08.974843 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Dec 13 15:10:08.975002 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 15:10:08.975199 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 15:10:08.975359 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 15:10:08.975514 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Dec 13 15:10:08.975659 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 15:10:08.975818 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 15:10:08.975983 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 15:10:08.976161 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 15:10:08.976322 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 15:10:08.976473 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 15:10:08.976626 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 15:10:08.976777 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 15:10:08.976950 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Dec 13 15:10:08.977134 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 15:10:08.977300 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:10:08.977466 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff]
Dec 13 15:10:08.977627 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 15:10:08.977815 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:10:08.978004 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
Dec 13 15:10:08.984537 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 15:10:08.984690 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:10:08.984854 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff]
Dec 13 15:10:08.984997 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 15:10:08.985162 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:10:08.985347 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff]
Dec 13 15:10:08.985491 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 15:10:08.985631 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:10:08.985791 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff]
Dec 13 15:10:08.985933 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 15:10:08.986088 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:10:08.986261 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff]
Dec 13 15:10:08.986426 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 15:10:08.986568 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:10:08.986726 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff]
Dec 13 15:10:08.986868 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13
15:10:08.987011 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 15:10:08.987190 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Dec 13 15:10:08.987354 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 15:10:08.987495 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 15:10:08.987514 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 15:10:08.987527 kernel: PCI: CLS 0 bytes, default 64 Dec 13 15:10:08.987539 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 15:10:08.987552 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 15:10:08.987564 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 15:10:08.987582 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 13 15:10:08.987595 kernel: Initialise system trusted keyrings Dec 13 15:10:08.987608 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 15:10:08.987620 kernel: Key type asymmetric registered Dec 13 15:10:08.987632 kernel: Asymmetric key parser 'x509' registered Dec 13 15:10:08.987645 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 15:10:08.987657 kernel: io scheduler mq-deadline registered Dec 13 15:10:08.987670 kernel: io scheduler kyber registered Dec 13 15:10:08.987682 kernel: io scheduler bfq registered Dec 13 15:10:08.987837 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 15:10:08.987987 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 15:10:08.988196 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.988360 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 15:10:08.988509 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 
15:10:08.988655 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.988811 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 15:10:08.988973 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 15:10:08.989186 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.989356 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 15:10:08.989503 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 15:10:08.989657 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.989799 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 15:10:08.989972 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 15:10:08.990146 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.990307 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 15:10:08.990454 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 15:10:08.990606 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.990754 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 15:10:08.990904 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 15:10:08.991050 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.991212 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 15:10:08.991369 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 15:10:08.991517 
kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:10:08.991536 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 15:10:08.991555 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 15:10:08.991568 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 15:10:08.991581 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 15:10:08.991593 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 15:10:08.991606 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 15:10:08.991619 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 15:10:08.991631 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 15:10:08.991644 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 15:10:08.991799 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 15:10:08.991946 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 15:10:08.992105 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T15:10:08 UTC (1734102608) Dec 13 15:10:08.992243 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 15:10:08.992261 kernel: intel_pstate: CPU model not supported Dec 13 15:10:08.992285 kernel: NET: Registered PF_INET6 protocol family Dec 13 15:10:08.992298 kernel: Segment Routing with IPv6 Dec 13 15:10:08.992310 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 15:10:08.992323 kernel: NET: Registered PF_PACKET protocol family Dec 13 15:10:08.992341 kernel: Key type dns_resolver registered Dec 13 15:10:08.992353 kernel: IPI shorthand broadcast: enabled Dec 13 15:10:08.992366 kernel: sched_clock: Marking stable (954929226, 217781540)->(1457703655, -284992889) Dec 13 15:10:08.992378 kernel: registered taskstats version 1 Dec 13 15:10:08.992391 kernel: Loading compiled-in X.509 certificates Dec 13 15:10:08.992403 kernel: Loaded X.509 cert 
'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 15:10:08.992415 kernel: Key type .fscrypt registered Dec 13 15:10:08.992427 kernel: Key type fscrypt-provisioning registered Dec 13 15:10:08.992439 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 15:10:08.992456 kernel: ima: Allocated hash algorithm: sha1 Dec 13 15:10:08.992468 kernel: ima: No architecture policies found Dec 13 15:10:08.992480 kernel: clk: Disabling unused clocks Dec 13 15:10:08.992493 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 15:10:08.992505 kernel: Write protecting the kernel read-only data: 28672k Dec 13 15:10:08.992517 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 15:10:08.992529 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 15:10:08.992542 kernel: Run /init as init process Dec 13 15:10:08.992554 kernel: with arguments: Dec 13 15:10:08.992570 kernel: /init Dec 13 15:10:08.992582 kernel: with environment: Dec 13 15:10:08.992607 kernel: HOME=/ Dec 13 15:10:08.992618 kernel: TERM=linux Dec 13 15:10:08.992630 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 15:10:08.992650 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 15:10:08.992667 systemd[1]: Detected virtualization kvm. Dec 13 15:10:08.992680 systemd[1]: Detected architecture x86-64. Dec 13 15:10:08.992696 systemd[1]: Running in initrd. Dec 13 15:10:08.992709 systemd[1]: No hostname configured, using default hostname. Dec 13 15:10:08.992734 systemd[1]: Hostname set to <localhost>. Dec 13 15:10:08.992747 systemd[1]: Initializing machine ID from VM UUID. 
Dec 13 15:10:08.992763 systemd[1]: Queued start job for default target initrd.target. Dec 13 15:10:08.992777 systemd[1]: Started systemd-ask-password-console.path. Dec 13 15:10:08.992789 systemd[1]: Reached target cryptsetup.target. Dec 13 15:10:08.992802 systemd[1]: Reached target paths.target. Dec 13 15:10:08.992818 systemd[1]: Reached target slices.target. Dec 13 15:10:08.992832 systemd[1]: Reached target swap.target. Dec 13 15:10:08.992844 systemd[1]: Reached target timers.target. Dec 13 15:10:08.992858 systemd[1]: Listening on iscsid.socket. Dec 13 15:10:08.992870 systemd[1]: Listening on iscsiuio.socket. Dec 13 15:10:08.992883 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 15:10:08.992896 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 15:10:08.992909 systemd[1]: Listening on systemd-journald.socket. Dec 13 15:10:08.992928 systemd[1]: Listening on systemd-networkd.socket. Dec 13 15:10:08.992941 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 15:10:08.992955 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 15:10:08.992968 systemd[1]: Reached target sockets.target. Dec 13 15:10:08.992981 systemd[1]: Starting kmod-static-nodes.service... Dec 13 15:10:08.992994 systemd[1]: Finished network-cleanup.service. Dec 13 15:10:08.993007 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 15:10:08.993020 systemd[1]: Starting systemd-journald.service... Dec 13 15:10:08.993032 systemd[1]: Starting systemd-modules-load.service... Dec 13 15:10:08.993049 systemd[1]: Starting systemd-resolved.service... Dec 13 15:10:08.993062 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 15:10:08.993092 systemd[1]: Finished kmod-static-nodes.service. Dec 13 15:10:08.993118 kernel: audit: type=1130 audit(1734102608.920:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:10:08.993130 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 15:10:08.993143 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 15:10:08.993165 systemd-journald[201]: Journal started Dec 13 15:10:08.993238 systemd-journald[201]: Runtime Journal (/run/log/journal/869b445bda954e07a5507237d6d21b44) is 4.7M, max 38.1M, 33.3M free. Dec 13 15:10:08.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:08.919506 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 15:10:09.019416 kernel: Bridge firewalling registered Dec 13 15:10:08.971774 systemd-resolved[203]: Positive Trust Anchors: Dec 13 15:10:09.034042 systemd[1]: Started systemd-resolved.service. Dec 13 15:10:09.034090 kernel: audit: type=1130 audit(1734102609.019:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.034111 kernel: SCSI subsystem initialized Dec 13 15:10:09.034127 kernel: audit: type=1130 audit(1734102609.027:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.034143 systemd[1]: Started systemd-journald.service. Dec 13 15:10:09.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:10:08.971791 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 15:10:08.971832 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 15:10:08.975563 systemd-resolved[203]: Defaulting to hostname 'linux'. Dec 13 15:10:09.045930 kernel: audit: type=1130 audit(1734102609.039:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.045977 kernel: audit: type=1130 audit(1734102609.040:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:08.999157 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 15:10:09.058747 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 15:10:09.058786 kernel: device-mapper: uevent: version 1.0.3 Dec 13 15:10:09.058804 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 15:10:09.040216 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 15:10:09.041040 systemd[1]: Reached target nss-lookup.target. Dec 13 15:10:09.059561 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 15:10:09.062664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 15:10:09.071840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 15:10:09.078568 kernel: audit: type=1130 audit(1734102609.073:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.072233 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 15:10:09.073381 systemd[1]: Finished systemd-modules-load.service. Dec 13 15:10:09.080162 systemd[1]: Starting systemd-sysctl.service... Dec 13 15:10:09.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.088083 kernel: audit: type=1130 audit(1734102609.079:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.089950 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 15:10:09.110021 kernel: audit: type=1130 audit(1734102609.090:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.111661 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 15:10:09.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.113483 systemd[1]: Starting dracut-cmdline.service... Dec 13 15:10:09.118291 kernel: audit: type=1130 audit(1734102609.112:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.126255 dracut-cmdline[223]: dracut-dracut-053 Dec 13 15:10:09.130302 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 15:10:09.212112 kernel: Loading iSCSI transport class v2.0-870. Dec 13 15:10:09.232086 kernel: iscsi: registered transport (tcp) Dec 13 15:10:09.260076 kernel: iscsi: registered transport (qla4xxx) Dec 13 15:10:09.260187 kernel: QLogic iSCSI HBA Driver Dec 13 15:10:09.308873 systemd[1]: Finished dracut-cmdline.service. 
Dec 13 15:10:09.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.311103 systemd[1]: Starting dracut-pre-udev.service... Dec 13 15:10:09.371115 kernel: raid6: sse2x4 gen() 10439 MB/s Dec 13 15:10:09.389124 kernel: raid6: sse2x4 xor() 8055 MB/s Dec 13 15:10:09.407150 kernel: raid6: sse2x2 gen() 9824 MB/s Dec 13 15:10:09.425176 kernel: raid6: sse2x2 xor() 8056 MB/s Dec 13 15:10:09.443204 kernel: raid6: sse2x1 gen() 10119 MB/s Dec 13 15:10:09.461775 kernel: raid6: sse2x1 xor() 7322 MB/s Dec 13 15:10:09.461819 kernel: raid6: using algorithm sse2x4 gen() 10439 MB/s Dec 13 15:10:09.461837 kernel: raid6: .... xor() 8055 MB/s, rmw enabled Dec 13 15:10:09.463033 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 15:10:09.480209 kernel: xor: automatically using best checksumming function avx Dec 13 15:10:09.593104 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 15:10:09.605534 systemd[1]: Finished dracut-pre-udev.service. Dec 13 15:10:09.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.606000 audit: BPF prog-id=7 op=LOAD Dec 13 15:10:09.606000 audit: BPF prog-id=8 op=LOAD Dec 13 15:10:09.607460 systemd[1]: Starting systemd-udevd.service... Dec 13 15:10:09.623945 systemd-udevd[400]: Using default interface naming scheme 'v252'. Dec 13 15:10:09.632870 systemd[1]: Started systemd-udevd.service. Dec 13 15:10:09.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.637869 systemd[1]: Starting dracut-pre-trigger.service... 
Dec 13 15:10:09.654792 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 15:10:09.693265 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 15:10:09.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.694981 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 15:10:09.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:09.781012 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 15:10:09.867082 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 15:10:09.892229 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 15:10:09.892269 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 15:10:09.892287 kernel: GPT:17805311 != 125829119 Dec 13 15:10:09.892303 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 15:10:09.892318 kernel: GPT:17805311 != 125829119 Dec 13 15:10:09.892334 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 15:10:09.892350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 15:10:09.910334 kernel: ACPI: bus type USB registered Dec 13 15:10:09.910408 kernel: usbcore: registered new interface driver usbfs Dec 13 15:10:09.913074 kernel: usbcore: registered new interface driver hub Dec 13 15:10:09.917074 kernel: usbcore: registered new device driver usb Dec 13 15:10:09.946089 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Dec 13 15:10:09.950094 kernel: AVX version of gcm_enc/dec engaged. Dec 13 15:10:09.950126 kernel: AES CTR mode by8 optimization enabled Dec 13 15:10:09.955454 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Dec 13 15:10:10.105689 kernel: libata version 3.00 loaded. Dec 13 15:10:10.105775 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 15:10:10.106408 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 15:10:10.106686 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 15:10:10.106898 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 15:10:10.107099 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 15:10:10.107282 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 15:10:10.107448 kernel: hub 1-0:1.0: USB hub found Dec 13 15:10:10.107757 kernel: hub 1-0:1.0: 4 ports detected Dec 13 15:10:10.107954 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 15:10:10.108351 kernel: hub 2-0:1.0: USB hub found Dec 13 15:10:10.108589 kernel: hub 2-0:1.0: 4 ports detected Dec 13 15:10:10.108776 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 15:10:10.109019 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 15:10:10.109074 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 15:10:10.109292 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 15:10:10.109469 kernel: scsi host0: ahci Dec 13 15:10:10.109689 kernel: scsi host1: ahci Dec 13 15:10:10.109890 kernel: scsi host2: ahci Dec 13 15:10:10.110091 kernel: scsi host3: ahci Dec 13 15:10:10.110304 kernel: scsi host4: ahci Dec 13 15:10:10.110538 kernel: scsi host5: ahci Dec 13 15:10:10.110794 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 15:10:10.110817 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 15:10:10.110833 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 15:10:10.110848 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 15:10:10.110863 kernel: ata5: 
SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Dec 13 15:10:10.110891 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 15:10:10.110740 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 15:10:10.112289 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 15:10:10.118192 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 15:10:10.123486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 15:10:10.127039 systemd[1]: Starting disk-uuid.service... Dec 13 15:10:10.142158 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 15:10:10.144206 disk-uuid[533]: Primary Header is updated. Dec 13 15:10:10.144206 disk-uuid[533]: Secondary Entries is updated. Dec 13 15:10:10.144206 disk-uuid[533]: Secondary Header is updated. Dec 13 15:10:10.235134 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 15:10:10.339037 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 15:10:10.339198 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 15:10:10.342564 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 15:10:10.342619 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 15:10:10.344096 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 15:10:10.345683 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 15:10:10.377162 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 15:10:10.384259 kernel: usbcore: registered new interface driver usbhid Dec 13 15:10:10.384303 kernel: usbhid: USB HID core driver Dec 13 15:10:10.391140 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 15:10:10.394138 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 15:10:11.170084 kernel: 
vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 15:10:11.171107 disk-uuid[534]: The operation has completed successfully. Dec 13 15:10:11.229298 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 15:10:11.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.229464 systemd[1]: Finished disk-uuid.service. Dec 13 15:10:11.231637 systemd[1]: Starting verity-setup.service... Dec 13 15:10:11.253098 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 15:10:11.312555 systemd[1]: Found device dev-mapper-usr.device. Dec 13 15:10:11.314354 systemd[1]: Finished verity-setup.service. Dec 13 15:10:11.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.317137 systemd[1]: Mounting sysusr-usr.mount... Dec 13 15:10:11.412252 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 15:10:11.412986 systemd[1]: Mounted sysusr-usr.mount. Dec 13 15:10:11.413928 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 15:10:11.415321 systemd[1]: Starting ignition-setup.service... Dec 13 15:10:11.416871 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 15:10:11.437668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 15:10:11.437791 kernel: BTRFS info (device vda6): using free space tree Dec 13 15:10:11.437823 kernel: BTRFS info (device vda6): has skinny extents Dec 13 15:10:11.458524 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 15:10:11.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.467210 systemd[1]: Finished ignition-setup.service. Dec 13 15:10:11.469958 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 15:10:11.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.547535 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 15:10:11.549000 audit: BPF prog-id=9 op=LOAD Dec 13 15:10:11.550783 systemd[1]: Starting systemd-networkd.service... Dec 13 15:10:11.587624 systemd-networkd[709]: lo: Link UP Dec 13 15:10:11.587639 systemd-networkd[709]: lo: Gained carrier Dec 13 15:10:11.589128 systemd-networkd[709]: Enumeration completed Dec 13 15:10:11.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.589297 systemd[1]: Started systemd-networkd.service. Dec 13 15:10:11.590148 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:10:11.598064 systemd-networkd[709]: eth0: Link UP Dec 13 15:10:11.598082 systemd-networkd[709]: eth0: Gained carrier Dec 13 15:10:11.605985 systemd[1]: Reached target network.target. Dec 13 15:10:11.610387 systemd[1]: Starting iscsiuio.service... 
Dec 13 15:10:11.622504 systemd[1]: Started iscsiuio.service. Dec 13 15:10:11.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.625512 systemd[1]: Starting iscsid.service... Dec 13 15:10:11.631845 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 15:10:11.631845 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 15:10:11.631845 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 15:10:11.631845 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 15:10:11.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.640249 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 15:10:11.640249 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 15:10:11.635209 systemd[1]: Started iscsid.service. Dec 13 15:10:11.644204 systemd[1]: Starting dracut-initqueue.service... Dec 13 15:10:11.649249 systemd-networkd[709]: eth0: DHCPv4 address 10.243.84.50/30, gateway 10.243.84.49 acquired from 10.243.84.49 Dec 13 15:10:11.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:10:11.664785 systemd[1]: Finished dracut-initqueue.service. Dec 13 15:10:11.665875 systemd[1]: Reached target remote-fs-pre.target. Dec 13 15:10:11.666529 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 15:10:11.667145 systemd[1]: Reached target remote-fs.target. Dec 13 15:10:11.669128 systemd[1]: Starting dracut-pre-mount.service... Dec 13 15:10:11.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.682977 systemd[1]: Finished dracut-pre-mount.service. Dec 13 15:10:11.692390 ignition[643]: Ignition 2.14.0 Dec 13 15:10:11.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.696916 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 15:10:11.692415 ignition[643]: Stage: fetch-offline Dec 13 15:10:11.692562 ignition[643]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:10:11.699365 systemd[1]: Starting ignition-fetch.service... 
Dec 13 15:10:11.692662 ignition[643]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:10:11.694514 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:10:11.694680 ignition[643]: parsed url from cmdline: "" Dec 13 15:10:11.694686 ignition[643]: no config URL provided Dec 13 15:10:11.694694 ignition[643]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:10:11.694722 ignition[643]: no config at "/usr/lib/ignition/user.ign" Dec 13 15:10:11.694733 ignition[643]: failed to fetch config: resource requires networking Dec 13 15:10:11.695554 ignition[643]: Ignition finished successfully Dec 13 15:10:11.714265 ignition[729]: Ignition 2.14.0 Dec 13 15:10:11.714285 ignition[729]: Stage: fetch Dec 13 15:10:11.714494 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:10:11.714530 ignition[729]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:10:11.716204 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:10:11.716362 ignition[729]: parsed url from cmdline: "" Dec 13 15:10:11.716368 ignition[729]: no config URL provided Dec 13 15:10:11.716377 ignition[729]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:10:11.716392 ignition[729]: no config at "/usr/lib/ignition/user.ign" Dec 13 15:10:11.721136 ignition[729]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 15:10:11.721186 ignition[729]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 13 15:10:11.721816 ignition[729]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 15:10:11.741221 ignition[729]: GET result: OK Dec 13 15:10:11.741359 ignition[729]: parsing config with SHA512: 36cf631efdfcd259dd27d12b3a3b7d1def002368adb397770214d09aa3c5f2e9ab48e3459d8c87d418411008489e86b58f290d90d09a7db16d3fc206d775f354 Dec 13 15:10:11.753235 unknown[729]: fetched base config from "system" Dec 13 15:10:11.754230 unknown[729]: fetched base config from "system" Dec 13 15:10:11.755019 unknown[729]: fetched user config from "openstack" Dec 13 15:10:11.756382 ignition[729]: fetch: fetch complete Dec 13 15:10:11.757158 ignition[729]: fetch: fetch passed Dec 13 15:10:11.757935 ignition[729]: Ignition finished successfully Dec 13 15:10:11.760706 systemd[1]: Finished ignition-fetch.service. Dec 13 15:10:11.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.763006 systemd[1]: Starting ignition-kargs.service... Dec 13 15:10:11.777641 ignition[735]: Ignition 2.14.0 Dec 13 15:10:11.777674 ignition[735]: Stage: kargs Dec 13 15:10:11.777874 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:10:11.777916 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:10:11.779315 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:10:11.780987 ignition[735]: kargs: kargs passed Dec 13 15:10:11.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.782426 systemd[1]: Finished ignition-kargs.service. 
Dec 13 15:10:11.781084 ignition[735]: Ignition finished successfully Dec 13 15:10:11.784868 systemd[1]: Starting ignition-disks.service... Dec 13 15:10:11.796999 ignition[741]: Ignition 2.14.0 Dec 13 15:10:11.798314 ignition[741]: Stage: disks Dec 13 15:10:11.799194 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:10:11.800158 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:10:11.801719 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:10:11.804409 ignition[741]: disks: disks passed Dec 13 15:10:11.805259 ignition[741]: Ignition finished successfully Dec 13 15:10:11.806407 systemd[1]: Finished ignition-disks.service. Dec 13 15:10:11.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.807339 systemd[1]: Reached target initrd-root-device.target. Dec 13 15:10:11.808374 systemd[1]: Reached target local-fs-pre.target. Dec 13 15:10:11.809574 systemd[1]: Reached target local-fs.target. Dec 13 15:10:11.810752 systemd[1]: Reached target sysinit.target. Dec 13 15:10:11.811906 systemd[1]: Reached target basic.target. Dec 13 15:10:11.814521 systemd[1]: Starting systemd-fsck-root.service... Dec 13 15:10:11.836831 systemd-fsck[748]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 15:10:11.841131 systemd[1]: Finished systemd-fsck-root.service. Dec 13 15:10:11.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.843208 systemd[1]: Mounting sysroot.mount... 
Dec 13 15:10:11.857110 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 15:10:11.857533 systemd[1]: Mounted sysroot.mount. Dec 13 15:10:11.858514 systemd[1]: Reached target initrd-root-fs.target. Dec 13 15:10:11.861267 systemd[1]: Mounting sysroot-usr.mount... Dec 13 15:10:11.862391 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 15:10:11.863241 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 15:10:11.866229 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 15:10:11.866281 systemd[1]: Reached target ignition-diskful.target. Dec 13 15:10:11.870459 systemd[1]: Mounted sysroot-usr.mount. Dec 13 15:10:11.872236 systemd[1]: Starting initrd-setup-root.service... Dec 13 15:10:11.879677 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 15:10:11.892848 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Dec 13 15:10:11.904253 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 15:10:11.912453 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 15:10:11.996549 systemd[1]: Finished initrd-setup-root.service. Dec 13 15:10:11.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:11.998783 systemd[1]: Starting ignition-mount.service... Dec 13 15:10:12.002082 systemd[1]: Starting sysroot-boot.service... Dec 13 15:10:12.018436 coreos-metadata[754]: Dec 13 15:10:12.018 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 15:10:12.019981 bash[802]: umount: /sysroot/usr/share/oem: not mounted. 
Dec 13 15:10:12.033925 ignition[804]: INFO : Ignition 2.14.0 Dec 13 15:10:12.033925 ignition[804]: INFO : Stage: mount Dec 13 15:10:12.035660 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:10:12.035660 ignition[804]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:10:12.035660 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:10:12.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:12.043682 coreos-metadata[754]: Dec 13 15:10:12.036 INFO Fetch successful Dec 13 15:10:12.043682 coreos-metadata[754]: Dec 13 15:10:12.036 INFO wrote hostname srv-iw0hd.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 15:10:12.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:12.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:12.039301 systemd[1]: Finished ignition-mount.service. Dec 13 15:10:12.060438 ignition[804]: INFO : mount: mount passed Dec 13 15:10:12.060438 ignition[804]: INFO : Ignition finished successfully Dec 13 15:10:12.041099 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 15:10:12.041287 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 15:10:12.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 15:10:12.062527 systemd[1]: Finished sysroot-boot.service. Dec 13 15:10:12.338277 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 15:10:12.350098 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (811) Dec 13 15:10:12.353679 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 15:10:12.353716 kernel: BTRFS info (device vda6): using free space tree Dec 13 15:10:12.353734 kernel: BTRFS info (device vda6): has skinny extents Dec 13 15:10:12.361160 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 15:10:12.362937 systemd[1]: Starting ignition-files.service... Dec 13 15:10:12.383259 ignition[831]: INFO : Ignition 2.14.0 Dec 13 15:10:12.384242 ignition[831]: INFO : Stage: files Dec 13 15:10:12.385061 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:10:12.385997 ignition[831]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:10:12.388272 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:10:12.390447 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Dec 13 15:10:12.392381 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 15:10:12.392381 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 15:10:12.397446 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 15:10:12.398618 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 15:10:12.400290 unknown[831]: wrote ssh authorized keys file for user: core Dec 13 15:10:12.401278 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 
15:10:12.402182 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 15:10:12.402182 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 15:10:12.741798 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 15:10:13.115881 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 15:10:13.117327 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 15:10:13.117327 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 15:10:13.415808 systemd-networkd[709]: eth0: Gained IPv6LL Dec 13 15:10:13.736251 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 15:10:14.035525 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 15:10:14.036848 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 15:10:14.036848 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 15:10:14.036848 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] 
writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:10:14.039925 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 15:10:14.138349 systemd-networkd[709]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d50c:24:19ff:fef3:5432/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d50c:24:19ff:fef3:5432/64 assigned by NDisc. 
Dec 13 15:10:14.138369 systemd-networkd[709]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 15:10:14.546511 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 15:10:15.710607 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 15:10:15.712542 ignition[831]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 15:10:15.724972 ignition[831]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:10:15.726499 ignition[831]: INFO : files: createResultFile: createFiles: 
op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:10:15.726499 ignition[831]: INFO : files: files passed Dec 13 15:10:15.726499 ignition[831]: INFO : Ignition finished successfully Dec 13 15:10:15.739630 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 15:10:15.739671 kernel: audit: type=1130 audit(1734102615.729:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.727512 systemd[1]: Finished ignition-files.service. Dec 13 15:10:15.732834 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 15:10:15.737806 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 15:10:15.739090 systemd[1]: Starting ignition-quench.service... Dec 13 15:10:15.744900 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 15:10:15.745089 systemd[1]: Finished ignition-quench.service. Dec 13 15:10:15.757752 kernel: audit: type=1130 audit(1734102615.747:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.757792 kernel: audit: type=1131 audit(1734102615.747:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:10:15.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.759233 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:10:15.759966 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 15:10:15.766550 kernel: audit: type=1130 audit(1734102615.761:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.761280 systemd[1]: Reached target ignition-complete.target. Dec 13 15:10:15.768401 systemd[1]: Starting initrd-parse-etc.service... Dec 13 15:10:15.789380 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 15:10:15.789571 systemd[1]: Finished initrd-parse-etc.service. Dec 13 15:10:15.801106 kernel: audit: type=1130 audit(1734102615.790:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.801147 kernel: audit: type=1131 audit(1734102615.790:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:10:15.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.791152 systemd[1]: Reached target initrd-fs.target. Dec 13 15:10:15.801607 systemd[1]: Reached target initrd.target. Dec 13 15:10:15.802803 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 15:10:15.804198 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 15:10:15.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.823874 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 15:10:15.830601 kernel: audit: type=1130 audit(1734102615.824:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.830144 systemd[1]: Starting initrd-cleanup.service... Dec 13 15:10:15.843969 systemd[1]: Stopped target nss-lookup.target. Dec 13 15:10:15.845604 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 15:10:15.847078 systemd[1]: Stopped target timers.target. Dec 13 15:10:15.848455 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 15:10:15.849446 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 15:10:15.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.855293 systemd[1]: Stopped target initrd.target. 
Dec 13 15:10:15.856952 kernel: audit: type=1131 audit(1734102615.850:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.856489 systemd[1]: Stopped target basic.target. Dec 13 15:10:15.857652 systemd[1]: Stopped target ignition-complete.target. Dec 13 15:10:15.858919 systemd[1]: Stopped target ignition-diskful.target. Dec 13 15:10:15.860188 systemd[1]: Stopped target initrd-root-device.target. Dec 13 15:10:15.861455 systemd[1]: Stopped target remote-fs.target. Dec 13 15:10:15.862621 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 15:10:15.863890 systemd[1]: Stopped target sysinit.target. Dec 13 15:10:15.865068 systemd[1]: Stopped target local-fs.target. Dec 13 15:10:15.866346 systemd[1]: Stopped target local-fs-pre.target. Dec 13 15:10:15.867522 systemd[1]: Stopped target swap.target. Dec 13 15:10:15.868625 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 15:10:15.874909 kernel: audit: type=1131 audit(1734102615.869:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.868878 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 15:10:15.869975 systemd[1]: Stopped target cryptsetup.target. Dec 13 15:10:15.881985 kernel: audit: type=1131 audit(1734102615.876:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:10:15.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.875707 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 15:10:15.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.875947 systemd[1]: Stopped dracut-initqueue.service. Dec 13 15:10:15.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.877103 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 15:10:15.877313 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 15:10:15.882922 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 15:10:15.883180 systemd[1]: Stopped ignition-files.service. Dec 13 15:10:15.885814 systemd[1]: Stopping ignition-mount.service... Dec 13 15:10:15.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.890481 systemd[1]: Stopping iscsiuio.service... Dec 13 15:10:15.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:15.893233 systemd[1]: Stopping sysroot-boot.service... Dec 13 15:10:15.894215 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 15:10:15.894644 systemd[1]: Stopped systemd-udev-trigger.service. 
Dec 13 15:10:15.896018 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 15:10:15.896334 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 15:10:15.901232 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 15:10:15.901459 systemd[1]: Stopped iscsiuio.service.
Dec 13 15:10:15.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.910513 ignition[869]: INFO : Ignition 2.14.0
Dec 13 15:10:15.910513 ignition[869]: INFO : Stage: umount
Dec 13 15:10:15.910513 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 15:10:15.910513 ignition[869]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 15:10:15.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.916888 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 15:10:15.911785 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 15:10:15.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.926858 ignition[869]: INFO : umount: umount passed
Dec 13 15:10:15.926858 ignition[869]: INFO : Ignition finished successfully
Dec 13 15:10:15.911950 systemd[1]: Finished initrd-cleanup.service.
Dec 13 15:10:15.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.919547 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 15:10:15.919696 systemd[1]: Stopped ignition-mount.service.
Dec 13 15:10:15.920456 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 15:10:15.920519 systemd[1]: Stopped ignition-disks.service.
Dec 13 15:10:15.921145 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 15:10:15.921199 systemd[1]: Stopped ignition-kargs.service.
Dec 13 15:10:15.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.921826 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 15:10:15.921892 systemd[1]: Stopped ignition-fetch.service.
Dec 13 15:10:15.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.922571 systemd[1]: Stopped target network.target.
Dec 13 15:10:15.945000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 15:10:15.923211 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 15:10:15.923283 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 15:10:15.923935 systemd[1]: Stopped target paths.target.
Dec 13 15:10:15.924529 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 15:10:15.924615 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 15:10:15.925260 systemd[1]: Stopped target slices.target.
Dec 13 15:10:15.926306 systemd[1]: Stopped target sockets.target.
Dec 13 15:10:15.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.927811 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 15:10:15.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.927851 systemd[1]: Closed iscsid.socket.
Dec 13 15:10:15.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.928582 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 15:10:15.928643 systemd[1]: Closed iscsiuio.socket.
Dec 13 15:10:15.929333 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 15:10:15.929413 systemd[1]: Stopped ignition-setup.service.
Dec 13 15:10:15.931513 systemd[1]: Stopping systemd-networkd.service...
Dec 13 15:10:15.932989 systemd[1]: Stopping systemd-resolved.service...
Dec 13 15:10:15.936098 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 15:10:15.937438 systemd-networkd[709]: eth0: DHCPv6 lease lost
Dec 13 15:10:15.968000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 15:10:15.940604 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 15:10:15.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.940766 systemd[1]: Stopped systemd-networkd.service.
Dec 13 15:10:15.942382 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 15:10:15.942528 systemd[1]: Stopped systemd-resolved.service.
Dec 13 15:10:15.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.944589 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 15:10:15.944650 systemd[1]: Closed systemd-networkd.socket.
Dec 13 15:10:15.946562 systemd[1]: Stopping network-cleanup.service...
Dec 13 15:10:15.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.950732 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 15:10:15.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.950821 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 15:10:15.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.952043 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 15:10:15.952237 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 15:10:15.953491 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 15:10:15.953556 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 15:10:15.954646 systemd[1]: Stopping systemd-udevd.service...
Dec 13 15:10:15.963908 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 15:10:15.965934 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 15:10:15.966253 systemd[1]: Stopped systemd-udevd.service.
Dec 13 15:10:15.971461 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 15:10:15.971614 systemd[1]: Stopped network-cleanup.service.
Dec 13 15:10:15.973032 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 15:10:15.973124 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 15:10:16.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.974488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 15:10:15.974536 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 15:10:16.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.975719 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 15:10:16.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.975818 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 15:10:15.976923 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 15:10:15.976993 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 15:10:16.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:16.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:15.978220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 15:10:15.978280 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 15:10:15.980672 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 15:10:15.982360 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 15:10:15.982470 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 15:10:16.005471 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 15:10:16.005582 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 15:10:16.007095 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 15:10:16.007167 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 15:10:16.009773 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 15:10:16.010532 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 15:10:16.010687 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 15:10:16.044968 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 15:10:16.045163 systemd[1]: Stopped sysroot-boot.service.
Dec 13 15:10:16.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:16.046758 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 15:10:16.047900 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 15:10:16.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:16.047988 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 15:10:16.050414 systemd[1]: Starting initrd-switch-root.service...
Dec 13 15:10:16.068017 systemd[1]: Switching root.
Dec 13 15:10:16.093559 iscsid[714]: iscsid shutting down.
Dec 13 15:10:16.094438 systemd-journald[201]: Received SIGTERM from PID 1 (n/a).
Dec 13 15:10:16.094512 systemd-journald[201]: Journal stopped
Dec 13 15:10:19.964039 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 15:10:19.966243 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 15:10:19.966277 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 15:10:19.966302 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 15:10:19.966333 kernel: SELinux: policy capability open_perms=1
Dec 13 15:10:19.966376 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 15:10:19.966419 kernel: SELinux: policy capability always_check_network=0
Dec 13 15:10:19.966438 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 15:10:19.966455 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 15:10:19.966472 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 15:10:19.966489 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 15:10:19.966508 systemd[1]: Successfully loaded SELinux policy in 65.716ms.
Dec 13 15:10:19.966545 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.141ms.
Dec 13 15:10:19.966568 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 15:10:19.966594 systemd[1]: Detected virtualization kvm.
Dec 13 15:10:19.966626 systemd[1]: Detected architecture x86-64.
Dec 13 15:10:19.966687 systemd[1]: Detected first boot.
Dec 13 15:10:19.966713 systemd[1]: Hostname set to .
Dec 13 15:10:19.966744 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 15:10:19.966762 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 15:10:19.966780 systemd[1]: Populated /etc with preset unit settings.
Dec 13 15:10:19.966798 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 15:10:19.966849 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 15:10:19.966887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 15:10:19.966919 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 15:10:19.966940 systemd[1]: Stopped iscsid.service.
Dec 13 15:10:19.966959 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 15:10:19.966979 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 15:10:19.966999 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 15:10:19.967039 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 15:10:19.967086 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 15:10:19.967109 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 15:10:19.967129 systemd[1]: Created slice system-getty.slice.
Dec 13 15:10:19.967149 systemd[1]: Created slice system-modprobe.slice.
Dec 13 15:10:19.967169 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 15:10:19.967197 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 15:10:19.967224 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 15:10:19.967257 systemd[1]: Created slice user.slice.
Dec 13 15:10:19.967278 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 15:10:19.967298 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 15:10:19.967319 systemd[1]: Set up automount boot.automount.
Dec 13 15:10:19.967338 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 15:10:19.967366 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 15:10:19.967387 systemd[1]: Stopped target initrd-fs.target.
Dec 13 15:10:19.967418 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 15:10:19.967439 systemd[1]: Reached target integritysetup.target.
Dec 13 15:10:19.967466 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 15:10:19.967487 systemd[1]: Reached target remote-fs.target.
Dec 13 15:10:19.967507 systemd[1]: Reached target slices.target.
Dec 13 15:10:19.967527 systemd[1]: Reached target swap.target.
Dec 13 15:10:19.967565 systemd[1]: Reached target torcx.target.
Dec 13 15:10:19.967590 systemd[1]: Reached target veritysetup.target.
Dec 13 15:10:19.967611 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 15:10:19.967678 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 15:10:19.967698 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 15:10:19.967717 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 15:10:19.967735 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 15:10:19.967754 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 15:10:19.967779 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 15:10:19.967799 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 15:10:19.967818 systemd[1]: Mounting media.mount...
Dec 13 15:10:19.967861 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:10:19.967908 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 15:10:19.967930 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 15:10:19.967951 systemd[1]: Mounting tmp.mount...
Dec 13 15:10:19.967971 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 15:10:19.967991 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 15:10:19.968011 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 15:10:19.968030 systemd[1]: Starting modprobe@configfs.service...
Dec 13 15:10:19.968050 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 15:10:19.968612 systemd[1]: Starting modprobe@drm.service...
Dec 13 15:10:19.968640 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 15:10:19.968678 systemd[1]: Starting modprobe@fuse.service...
Dec 13 15:10:19.968703 systemd[1]: Starting modprobe@loop.service...
Dec 13 15:10:19.968759 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 15:10:19.968779 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 15:10:19.968797 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 15:10:19.968815 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 15:10:19.968858 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 15:10:19.968881 systemd[1]: Stopped systemd-journald.service.
Dec 13 15:10:19.968912 kernel: fuse: init (API version 7.34)
Dec 13 15:10:19.968933 systemd[1]: Starting systemd-journald.service...
Dec 13 15:10:19.968953 systemd[1]: Starting systemd-modules-load.service...
Dec 13 15:10:19.968973 systemd[1]: Starting systemd-network-generator.service...
Dec 13 15:10:19.968992 kernel: loop: module loaded
Dec 13 15:10:19.969011 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 15:10:19.969030 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 15:10:19.969048 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 15:10:19.969181 systemd[1]: Stopped verity-setup.service.
Dec 13 15:10:19.969206 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:10:19.969256 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 15:10:19.969277 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 15:10:19.969295 systemd[1]: Mounted media.mount.
Dec 13 15:10:19.969328 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 15:10:19.969348 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 15:10:19.969367 systemd[1]: Mounted tmp.mount.
Dec 13 15:10:19.969386 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 15:10:19.969405 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 15:10:19.969424 systemd[1]: Finished modprobe@configfs.service.
Dec 13 15:10:19.969477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 15:10:19.969496 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 15:10:19.969515 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 15:10:19.969546 systemd[1]: Finished modprobe@drm.service.
Dec 13 15:10:19.969566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 15:10:19.969610 systemd-journald[983]: Journal started
Dec 13 15:10:19.969696 systemd-journald[983]: Runtime Journal (/run/log/journal/869b445bda954e07a5507237d6d21b44) is 4.7M, max 38.1M, 33.3M free.
Dec 13 15:10:16.264000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 15:10:16.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 15:10:16.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 15:10:16.340000 audit: BPF prog-id=10 op=LOAD
Dec 13 15:10:19.973104 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 15:10:19.973154 systemd[1]: Started systemd-journald.service.
Dec 13 15:10:16.340000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 15:10:16.341000 audit: BPF prog-id=11 op=LOAD
Dec 13 15:10:16.341000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 15:10:16.470000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 15:10:16.470000 audit[901]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:10:16.470000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 15:10:16.473000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 15:10:16.473000 audit[901]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:10:16.473000 audit: CWD cwd="/"
Dec 13 15:10:16.473000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:16.473000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:16.473000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 15:10:19.675000 audit: BPF prog-id=12 op=LOAD
Dec 13 15:10:19.675000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 15:10:19.675000 audit: BPF prog-id=13 op=LOAD
Dec 13 15:10:19.676000 audit: BPF prog-id=14 op=LOAD
Dec 13 15:10:19.676000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 15:10:19.676000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 15:10:19.677000 audit: BPF prog-id=15 op=LOAD
Dec 13 15:10:19.677000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 15:10:19.677000 audit: BPF prog-id=16 op=LOAD
Dec 13 15:10:19.677000 audit: BPF prog-id=17 op=LOAD
Dec 13 15:10:19.677000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 15:10:19.677000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 15:10:19.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.686000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 15:10:19.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.886000 audit: BPF prog-id=18 op=LOAD
Dec 13 15:10:19.887000 audit: BPF prog-id=19 op=LOAD
Dec 13 15:10:19.887000 audit: BPF prog-id=20 op=LOAD
Dec 13 15:10:19.888000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 15:10:19.888000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 15:10:19.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.961000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 15:10:19.961000 audit[983]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd19860e60 a2=4000 a3=7ffd19860efc items=0 ppid=1 pid=983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:10:19.961000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 15:10:19.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.671078 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 15:10:16.467521 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 15:10:19.671103 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 15:10:16.468302 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 15:10:19.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:19.679034 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 15:10:16.468342 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 15:10:19.976465 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 15:10:19.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:16.468413 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 15:10:19.976679 systemd[1]: Finished modprobe@fuse.service.
Dec 13 15:10:16.468432 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 15:10:19.977694 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 15:10:16.468497 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 15:10:19.977919 systemd[1]: Finished modprobe@loop.service.
Dec 13 15:10:16.468520 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 15:10:19.978948 systemd[1]: Finished systemd-modules-load.service.
Dec 13 15:10:16.469031 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 15:10:19.979982 systemd[1]: Finished systemd-network-generator.service.
Dec 13 15:10:16.469116 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 15:10:19.981147 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 15:10:16.469153 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 15:10:19.984478 systemd[1]: Reached target network-pre.target.
Dec 13 15:10:16.469904 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 15:10:19.987493 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 15:10:16.469962 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 15:10:16.470011 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 15:10:16.470039 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 15:10:16.470106 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 15:10:16.470133 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 15:10:19.086373 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:19Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:10:19.086911 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:19Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:10:19.996291 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 15:10:19.087828 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:19Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:10:19.996992 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 15:10:19.088192 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:19Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:10:19.088276 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:19Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 15:10:19.088385 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T15:10:19Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 15:10:20.000258 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 15:10:20.002503 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 15:10:20.003384 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 15:10:20.005273 systemd[1]: Starting systemd-random-seed.service...
Dec 13 15:10:20.006270 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 15:10:20.009196 systemd[1]: Starting systemd-sysctl.service...
Dec 13 15:10:20.016763 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 15:10:20.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.017788 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 15:10:20.018561 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 15:10:20.023188 systemd[1]: Starting systemd-sysusers.service...
Dec 13 15:10:20.039625 systemd-journald[983]: Time spent on flushing to /var/log/journal/869b445bda954e07a5507237d6d21b44 is 90.492ms for 1301 entries.
Dec 13 15:10:20.039625 systemd-journald[983]: System Journal (/var/log/journal/869b445bda954e07a5507237d6d21b44) is 8.0M, max 584.8M, 576.8M free.
Dec 13 15:10:20.161941 systemd-journald[983]: Received client request to flush runtime journal.
Dec 13 15:10:20.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.043917 systemd[1]: Finished systemd-sysctl.service.
Dec 13 15:10:20.049805 systemd[1]: Finished systemd-random-seed.service.
Dec 13 15:10:20.050606 systemd[1]: Reached target first-boot-complete.target.
Dec 13 15:10:20.073417 systemd[1]: Finished systemd-sysusers.service.
Dec 13 15:10:20.076466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 15:10:20.134404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 15:10:20.164016 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 15:10:20.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.170345 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 15:10:20.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.173069 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 15:10:20.187282 udevadm[1014]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 15:10:20.756025 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 15:10:20.764744 kernel: kauditd_printk_skb: 102 callbacks suppressed
Dec 13 15:10:20.764926 kernel: audit: type=1130 audit(1734102620.757:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.759000 audit: BPF prog-id=21 op=LOAD
Dec 13 15:10:20.767164 kernel: audit: type=1334 audit(1734102620.759:143): prog-id=21 op=LOAD
Dec 13 15:10:20.766006 systemd[1]: Starting systemd-udevd.service...
Dec 13 15:10:20.764000 audit: BPF prog-id=22 op=LOAD
Dec 13 15:10:20.764000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 15:10:20.764000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 15:10:20.769961 kernel: audit: type=1334 audit(1734102620.764:144): prog-id=22 op=LOAD
Dec 13 15:10:20.770154 kernel: audit: type=1334 audit(1734102620.764:145): prog-id=7 op=UNLOAD
Dec 13 15:10:20.770202 kernel: audit: type=1334 audit(1734102620.764:146): prog-id=8 op=UNLOAD
Dec 13 15:10:20.796177 systemd-udevd[1015]: Using default interface naming scheme 'v252'.
Dec 13 15:10:20.827796 systemd[1]: Started systemd-udevd.service.
Dec 13 15:10:20.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.837388 kernel: audit: type=1130 audit(1734102620.828:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.837656 systemd[1]: Starting systemd-networkd.service...
Dec 13 15:10:20.829000 audit: BPF prog-id=23 op=LOAD
Dec 13 15:10:20.843364 kernel: audit: type=1334 audit(1734102620.829:148): prog-id=23 op=LOAD
Dec 13 15:10:20.850000 audit: BPF prog-id=24 op=LOAD
Dec 13 15:10:20.850000 audit: BPF prog-id=25 op=LOAD
Dec 13 15:10:20.854743 kernel: audit: type=1334 audit(1734102620.850:149): prog-id=24 op=LOAD
Dec 13 15:10:20.854847 kernel: audit: type=1334 audit(1734102620.850:150): prog-id=25 op=LOAD
Dec 13 15:10:20.854964 systemd[1]: Starting systemd-userdbd.service...
Dec 13 15:10:20.850000 audit: BPF prog-id=26 op=LOAD
Dec 13 15:10:20.859313 kernel: audit: type=1334 audit(1734102620.850:151): prog-id=26 op=LOAD
Dec 13 15:10:20.904913 systemd[1]: Started systemd-userdbd.service.
Dec 13 15:10:20.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:20.947068 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 15:10:20.980852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 15:10:21.029100 systemd-networkd[1030]: lo: Link UP
Dec 13 15:10:21.029114 systemd-networkd[1030]: lo: Gained carrier
Dec 13 15:10:21.029951 systemd-networkd[1030]: Enumeration completed
Dec 13 15:10:21.030105 systemd[1]: Started systemd-networkd.service.
Dec 13 15:10:21.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:21.031111 systemd-networkd[1030]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 15:10:21.033533 systemd-networkd[1030]: eth0: Link UP
Dec 13 15:10:21.033550 systemd-networkd[1030]: eth0: Gained carrier
Dec 13 15:10:21.034109 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 15:10:21.050497 systemd-networkd[1030]: eth0: DHCPv4 address 10.243.84.50/30, gateway 10.243.84.49 acquired from 10.243.84.49
Dec 13 15:10:21.125100 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 15:10:21.151103 kernel: ACPI: button: Power Button [PWRF]
Dec 13 15:10:21.146000 audit[1026]: AVC avc: denied { confidentiality } for pid=1026 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 15:10:21.146000 audit[1026]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564587a71180 a1=337fc a2=7fd3f6a8dbc5 a3=5 items=110 ppid=1015 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:10:21.146000 audit: CWD cwd="/"
Dec 13 15:10:21.146000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=1 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=2 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=3 name=(null) inode=13963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=4 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=5 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=6 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=7 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=8 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=9 name=(null) inode=13966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=10 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=11 name=(null) inode=13967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=12 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=13 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=14 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=15 name=(null) inode=13969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=16 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=17 name=(null) inode=13970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=18 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=19 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=20 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=21 name=(null) inode=13972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=22 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=23 name=(null) inode=13973 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=24 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=25 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=26 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=27 name=(null) inode=13975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=28 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=29 name=(null) inode=13976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=30 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=31 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=32 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=33 name=(null) inode=13978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=34 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=35 name=(null) inode=13979 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=36 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=37 name=(null) inode=13980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=38 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=39 name=(null) inode=13981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=40 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=41 name=(null) inode=13982 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=42 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=43 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=44 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=45 name=(null) inode=13984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=46 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=47 name=(null) inode=13985 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=48 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=49 name=(null) inode=13986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=50 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=51 name=(null) inode=13987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=52 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=53 name=(null) inode=13988 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=55 name=(null) inode=13989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=56 name=(null) inode=13989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=57 name=(null) inode=13990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=58 name=(null) inode=13989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=59 name=(null) inode=13991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=60 name=(null) inode=13989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=61 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=62 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=63 name=(null) inode=13993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=64 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=65 name=(null) inode=13994 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=66 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=67 name=(null) inode=13995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=68 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=69 name=(null) inode=13996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=70 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=71 name=(null) inode=13997 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=72 name=(null) inode=13989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=73 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=74 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=75 name=(null) inode=13999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=76 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=77 name=(null) inode=14000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=78 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=79 name=(null) inode=14001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=80 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=81 name=(null) inode=14002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=82 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=83 name=(null) inode=14003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=84 name=(null) inode=13989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=85 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=86 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=87 name=(null) inode=14005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=88 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=89 name=(null) inode=14006 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=90 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=91 name=(null) inode=14007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=92 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=93 name=(null) inode=14008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=94 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=95 name=(null) inode=14009 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=96 name=(null) inode=13989 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=97 name=(null) inode=14010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=98 name=(null) inode=14010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=99 name=(null) inode=14011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=100 name=(null) inode=14010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=101 name=(null) inode=14012 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=102 name=(null) inode=14010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=103 name=(null) inode=14013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=104 name=(null) inode=14010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=105 name=(null) inode=14014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=106 name=(null) inode=14010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=107 name=(null) inode=14015 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PATH item=109 name=(null) inode=14016 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:10:21.146000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 15:10:21.176082 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 15:10:21.186778 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 15:10:21.186835 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 15:10:21.187088 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 15:10:21.310758 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 15:10:21.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:10:21.313420 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 15:10:21.336562 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 15:10:21.367537 systemd[1]: Finished lvm2-activation-early.service. Dec 13 15:10:21.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.368465 systemd[1]: Reached target cryptsetup.target. Dec 13 15:10:21.370692 systemd[1]: Starting lvm2-activation.service... Dec 13 15:10:21.376883 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 15:10:21.402566 systemd[1]: Finished lvm2-activation.service. Dec 13 15:10:21.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.403454 systemd[1]: Reached target local-fs-pre.target. Dec 13 15:10:21.404091 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 15:10:21.404136 systemd[1]: Reached target local-fs.target. Dec 13 15:10:21.404738 systemd[1]: Reached target machines.target. Dec 13 15:10:21.407275 systemd[1]: Starting ldconfig.service... Dec 13 15:10:21.408480 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 15:10:21.408536 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:10:21.409977 systemd[1]: Starting systemd-boot-update.service... Dec 13 15:10:21.411955 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 15:10:21.417282 systemd[1]: Starting systemd-machine-id-commit.service... 
Dec 13 15:10:21.421226 systemd[1]: Starting systemd-sysext.service... Dec 13 15:10:21.424590 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Dec 13 15:10:21.426371 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 15:10:21.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.532655 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 15:10:21.551974 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 15:10:21.587356 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 15:10:21.587633 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 15:10:21.672101 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 15:10:21.681520 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 15:10:21.683158 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 15:10:21.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.714326 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 15:10:21.736167 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 15:10:21.751780 (sd-sysext)[1059]: Using extensions 'kubernetes'. Dec 13 15:10:21.754877 (sd-sysext)[1059]: Merged extensions into '/usr'. 
Dec 13 15:10:21.758297 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Dec 13 15:10:21.758297 systemd-fsck[1056]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 15:10:21.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.762089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 15:10:21.764545 systemd[1]: Mounting boot.mount... Dec 13 15:10:21.798302 systemd[1]: Mounted boot.mount. Dec 13 15:10:21.805532 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:10:21.808667 systemd[1]: Mounting usr-share-oem.mount... Dec 13 15:10:21.811364 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:10:21.815406 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 15:10:21.818671 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:10:21.821258 systemd[1]: Starting modprobe@loop.service... Dec 13 15:10:21.822628 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 15:10:21.823032 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:10:21.823361 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:10:21.827664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:10:21.827999 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 15:10:21.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.829665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:10:21.829838 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:10:21.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.832070 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:10:21.832243 systemd[1]: Finished modprobe@loop.service. Dec 13 15:10:21.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.834102 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 15:10:21.834254 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 15:10:21.837722 systemd[1]: Finished systemd-boot-update.service. Dec 13 15:10:21.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.838729 systemd[1]: Mounted usr-share-oem.mount. Dec 13 15:10:21.840926 systemd[1]: Finished systemd-sysext.service. Dec 13 15:10:21.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:21.843311 systemd[1]: Starting ensure-sysext.service... Dec 13 15:10:21.845392 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 15:10:21.856451 systemd[1]: Reloading. Dec 13 15:10:21.881369 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 15:10:21.885191 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 15:10:21.896813 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 15:10:21.979277 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-12-13T15:10:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 15:10:21.979325 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-12-13T15:10:21Z" level=info msg="torcx already run" Dec 13 15:10:22.101735 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 15:10:22.139695 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 15:10:22.139728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 15:10:22.166585 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 15:10:22.243000 audit: BPF prog-id=27 op=LOAD Dec 13 15:10:22.243000 audit: BPF prog-id=18 op=UNLOAD Dec 13 15:10:22.243000 audit: BPF prog-id=28 op=LOAD Dec 13 15:10:22.243000 audit: BPF prog-id=29 op=LOAD Dec 13 15:10:22.243000 audit: BPF prog-id=19 op=UNLOAD Dec 13 15:10:22.243000 audit: BPF prog-id=20 op=UNLOAD Dec 13 15:10:22.248000 audit: BPF prog-id=30 op=LOAD Dec 13 15:10:22.248000 audit: BPF prog-id=23 op=UNLOAD Dec 13 15:10:22.247935 systemd-networkd[1030]: eth0: Gained IPv6LL Dec 13 15:10:22.249000 audit: BPF prog-id=31 op=LOAD Dec 13 15:10:22.250000 audit: BPF prog-id=32 op=LOAD Dec 13 15:10:22.250000 audit: BPF prog-id=21 op=UNLOAD Dec 13 15:10:22.250000 audit: BPF prog-id=22 op=UNLOAD Dec 13 15:10:22.250000 audit: BPF prog-id=33 op=LOAD Dec 13 15:10:22.250000 audit: BPF prog-id=24 op=UNLOAD Dec 13 15:10:22.251000 audit: BPF prog-id=34 op=LOAD Dec 13 15:10:22.251000 audit: BPF prog-id=35 op=LOAD Dec 13 15:10:22.251000 audit: BPF prog-id=25 op=UNLOAD Dec 13 15:10:22.251000 audit: BPF prog-id=26 op=UNLOAD Dec 13 15:10:22.255163 systemd[1]: Finished ldconfig.service. Dec 13 15:10:22.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.257449 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 15:10:22.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.262634 systemd[1]: Starting audit-rules.service... Dec 13 15:10:22.264884 systemd[1]: Starting clean-ca-certificates.service... Dec 13 15:10:22.270137 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 15:10:22.272000 audit: BPF prog-id=36 op=LOAD Dec 13 15:10:22.273336 systemd[1]: Starting systemd-resolved.service... Dec 13 15:10:22.277000 audit: BPF prog-id=37 op=LOAD Dec 13 15:10:22.278643 systemd[1]: Starting systemd-timesyncd.service... Dec 13 15:10:22.280903 systemd[1]: Starting systemd-update-utmp.service... Dec 13 15:10:22.283531 systemd[1]: Finished clean-ca-certificates.service. Dec 13 15:10:22.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.288371 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:10:22.299000 audit[1140]: SYSTEM_BOOT pid=1140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.301587 systemd[1]: Finished systemd-update-utmp.service. Dec 13 15:10:22.306451 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:10:22.308327 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 15:10:22.310741 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:10:22.314250 systemd[1]: Starting modprobe@loop.service... Dec 13 15:10:22.314968 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 15:10:22.315169 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:10:22.315353 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:10:22.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.319087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:10:22.319329 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:10:22.320904 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:10:22.323888 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:10:22.327251 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:10:22.327973 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 15:10:22.328125 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:10:22.328260 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 15:10:22.329140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:10:22.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.329329 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:10:22.331376 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:10:22.335589 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:10:22.338517 systemd[1]: Starting modprobe@drm.service... Dec 13 15:10:22.342418 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:10:22.343409 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 15:10:22.343576 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:10:22.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.347208 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 15:10:22.348085 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:10:22.349559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:10:22.349792 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 15:10:22.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.355228 systemd[1]: Finished ensure-sysext.service. Dec 13 15:10:22.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.357476 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:10:22.357662 systemd[1]: Finished modprobe@loop.service. Dec 13 15:10:22.358656 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 15:10:22.359523 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 15:10:22.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:10:22.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.361825 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 15:10:22.361989 systemd[1]: Finished modprobe@drm.service. Dec 13 15:10:22.365460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:10:22.365645 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:10:22.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.382362 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 15:10:22.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.385720 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:10:22.390284 systemd[1]: Starting systemd-update-done.service... Dec 13 15:10:22.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:10:22.400218 systemd[1]: Finished systemd-update-done.service. 
Dec 13 15:10:22.421000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 15:10:22.421000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffebe16d360 a2=420 a3=0 items=0 ppid=1134 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 15:10:22.421000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 15:10:22.421916 augenrules[1162]: No rules Dec 13 15:10:22.422495 systemd[1]: Finished audit-rules.service. Dec 13 15:10:22.433872 systemd[1]: Started systemd-timesyncd.service. Dec 13 15:10:22.434662 systemd[1]: Reached target time-set.target. Dec 13 15:10:22.435859 systemd-resolved[1137]: Positive Trust Anchors: Dec 13 15:10:22.436237 systemd-resolved[1137]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 15:10:22.436400 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 15:10:22.444359 systemd-resolved[1137]: Using system hostname 'srv-iw0hd.gb1.brightbox.com'. Dec 13 15:10:22.447091 systemd[1]: Started systemd-resolved.service. Dec 13 15:10:22.447870 systemd[1]: Reached target network.target. Dec 13 15:10:22.448499 systemd[1]: Reached target network-online.target. Dec 13 15:10:22.449145 systemd[1]: Reached target nss-lookup.target. 
Dec 13 15:10:22.449792 systemd[1]: Reached target sysinit.target. Dec 13 15:10:22.450501 systemd[1]: Started motdgen.path. Dec 13 15:10:22.451136 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 15:10:22.452044 systemd[1]: Started logrotate.timer. Dec 13 15:10:22.452783 systemd[1]: Started mdadm.timer. Dec 13 15:10:22.453446 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 15:10:23.410055 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 15:10:23.410095 systemd[1]: Reached target paths.target. Dec 13 15:10:23.410170 systemd-timesyncd[1139]: Contacted time server 212.82.85.226:123 (0.flatcar.pool.ntp.org). Dec 13 15:10:23.410262 systemd-timesyncd[1139]: Initial clock synchronization to Fri 2024-12-13 15:10:23.410025 UTC. Dec 13 15:10:23.410686 systemd[1]: Reached target timers.target. Dec 13 15:10:23.411697 systemd[1]: Listening on dbus.socket. Dec 13 15:10:23.414005 systemd[1]: Starting docker.socket... Dec 13 15:10:23.414587 systemd-resolved[1137]: Clock change detected. Flushing caches. Dec 13 15:10:23.418575 systemd[1]: Listening on sshd.socket. Dec 13 15:10:23.419350 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:10:23.420101 systemd[1]: Listening on docker.socket. Dec 13 15:10:23.421037 systemd[1]: Reached target sockets.target. Dec 13 15:10:23.421639 systemd[1]: Reached target basic.target. Dec 13 15:10:23.422300 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 15:10:23.422382 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 15:10:23.423912 systemd[1]: Starting containerd.service... 
Dec 13 15:10:23.425971 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 15:10:23.428398 systemd[1]: Starting dbus.service... Dec 13 15:10:23.432813 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 15:10:23.435914 systemd[1]: Starting extend-filesystems.service... Dec 13 15:10:23.437008 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 15:10:23.443767 systemd[1]: Starting kubelet.service... Dec 13 15:10:23.448235 systemd[1]: Starting motdgen.service... Dec 13 15:10:23.451654 systemd[1]: Starting prepare-helm.service... Dec 13 15:10:23.455920 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 15:10:23.461359 systemd[1]: Starting sshd-keygen.service... Dec 13 15:10:23.467925 systemd[1]: Starting systemd-logind.service... Dec 13 15:10:23.469168 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:10:23.469341 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 15:10:23.469917 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 15:10:23.472446 systemd[1]: Starting update-engine.service... Dec 13 15:10:23.477173 jq[1175]: false Dec 13 15:10:23.475626 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 15:10:23.485984 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 15:10:23.503270 jq[1186]: true Dec 13 15:10:23.486292 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 15:10:23.488647 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Dec 13 15:10:23.488920 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 15:10:23.513892 tar[1194]: linux-amd64/helm Dec 13 15:10:23.518525 jq[1195]: true Dec 13 15:10:23.544492 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:10:23.544562 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:10:23.557497 dbus-daemon[1172]: [system] SELinux support is enabled Dec 13 15:10:23.557731 systemd[1]: Started dbus.service. Dec 13 15:10:23.560750 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 15:10:23.560805 systemd[1]: Reached target system-config.target. Dec 13 15:10:23.561493 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 15:10:23.561532 systemd[1]: Reached target user-config.target. 
Dec 13 15:10:23.569508 extend-filesystems[1176]: Found loop1 Dec 13 15:10:23.569508 extend-filesystems[1176]: Found vda Dec 13 15:10:23.569508 extend-filesystems[1176]: Found vda1 Dec 13 15:10:23.569508 extend-filesystems[1176]: Found vda2 Dec 13 15:10:23.569508 extend-filesystems[1176]: Found vda3 Dec 13 15:10:23.573173 extend-filesystems[1176]: Found usr Dec 13 15:10:23.573173 extend-filesystems[1176]: Found vda4 Dec 13 15:10:23.573173 extend-filesystems[1176]: Found vda6 Dec 13 15:10:23.573173 extend-filesystems[1176]: Found vda7 Dec 13 15:10:23.573173 extend-filesystems[1176]: Found vda9 Dec 13 15:10:23.573173 extend-filesystems[1176]: Checking size of /dev/vda9 Dec 13 15:10:23.573377 dbus-daemon[1172]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1030 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 15:10:23.578467 systemd[1]: Starting systemd-hostnamed.service... Dec 13 15:10:23.580682 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 15:10:23.580947 systemd[1]: Finished motdgen.service. Dec 13 15:10:23.615069 extend-filesystems[1176]: Resized partition /dev/vda9 Dec 13 15:10:23.624296 extend-filesystems[1227]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 15:10:23.633013 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 15:10:23.665217 update_engine[1183]: I1213 15:10:23.664276 1183 main.cc:92] Flatcar Update Engine starting Dec 13 15:10:23.669583 systemd[1]: Started update-engine.service. Dec 13 15:10:23.669812 update_engine[1183]: I1213 15:10:23.669781 1183 update_check_scheduler.cc:74] Next update check in 9m34s Dec 13 15:10:23.673077 systemd[1]: Started locksmithd.service. 
Dec 13 15:10:23.706991 env[1196]: time="2024-12-13T15:10:23.704703578Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 15:10:23.725924 bash[1228]: Updated "/home/core/.ssh/authorized_keys" Dec 13 15:10:23.726574 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 15:10:23.762779 env[1196]: time="2024-12-13T15:10:23.762591335Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 15:10:23.763139 env[1196]: time="2024-12-13T15:10:23.763063604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 15:10:23.766506 env[1196]: time="2024-12-13T15:10:23.766424661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:10:23.766581 env[1196]: time="2024-12-13T15:10:23.766503441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 15:10:23.767061 env[1196]: time="2024-12-13T15:10:23.767021921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:10:23.767138 env[1196]: time="2024-12-13T15:10:23.767100795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 15:10:23.767189 env[1196]: time="2024-12-13T15:10:23.767136524Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 15:10:23.767233 env[1196]: time="2024-12-13T15:10:23.767192810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 15:10:23.768109 env[1196]: time="2024-12-13T15:10:23.767432782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 15:10:23.768344 env[1196]: time="2024-12-13T15:10:23.768309162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 15:10:23.768618 env[1196]: time="2024-12-13T15:10:23.768571062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:10:23.768701 env[1196]: time="2024-12-13T15:10:23.768609809Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 15:10:23.768824 env[1196]: time="2024-12-13T15:10:23.768788461Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 15:10:23.768896 env[1196]: time="2024-12-13T15:10:23.768856165Z" level=info msg="metadata content store policy set" policy=shared Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.788828561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.788909840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.788933370Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789030691Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789056200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789077880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789099413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789119079Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789139610Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789165641Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789188415Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789221724Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789389104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 15:10:23.790991 env[1196]: time="2024-12-13T15:10:23.789532939Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.789826924Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.789876725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.789899899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.789996395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790021051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790045448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790069258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790089244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790107638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790124916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790142258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790165343Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790353901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790389575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.791579 env[1196]: time="2024-12-13T15:10:23.790421313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 15:10:23.792208 env[1196]: time="2024-12-13T15:10:23.790439876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 15:10:23.792208 env[1196]: time="2024-12-13T15:10:23.790462143Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 15:10:23.792208 env[1196]: time="2024-12-13T15:10:23.790486443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 15:10:23.792208 env[1196]: time="2024-12-13T15:10:23.790530606Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 15:10:23.792208 env[1196]: time="2024-12-13T15:10:23.790590848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 15:10:23.792412 env[1196]: time="2024-12-13T15:10:23.790855895Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 15:10:23.792412 env[1196]: time="2024-12-13T15:10:23.790942536Z" level=info msg="Connect containerd service" Dec 13 15:10:23.795200 env[1196]: time="2024-12-13T15:10:23.792624656Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 15:10:23.795200 env[1196]: time="2024-12-13T15:10:23.794345345Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 15:10:23.798789 env[1196]: time="2024-12-13T15:10:23.798760498Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 15:10:23.799899 env[1196]: time="2024-12-13T15:10:23.799870641Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 15:10:23.800681 systemd[1]: Created slice system-sshd.slice. Dec 13 15:10:23.801476 systemd[1]: Started containerd.service. Dec 13 15:10:23.802888 env[1196]: time="2024-12-13T15:10:23.801984826Z" level=info msg="containerd successfully booted in 0.103448s" Dec 13 15:10:23.811858 systemd-logind[1182]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 15:10:23.812666 systemd-logind[1182]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 15:10:23.813014 env[1196]: time="2024-12-13T15:10:23.812503306Z" level=info msg="Start subscribing containerd event" Dec 13 15:10:23.813941 env[1196]: time="2024-12-13T15:10:23.813235168Z" level=info msg="Start recovering state" Dec 13 15:10:23.815572 systemd-logind[1182]: New seat seat0. 
Dec 13 15:10:23.815818 env[1196]: time="2024-12-13T15:10:23.815790022Z" level=info msg="Start event monitor" Dec 13 15:10:23.816524 env[1196]: time="2024-12-13T15:10:23.816473739Z" level=info msg="Start snapshots syncer" Dec 13 15:10:23.817043 env[1196]: time="2024-12-13T15:10:23.817012869Z" level=info msg="Start cni network conf syncer for default" Dec 13 15:10:23.818274 env[1196]: time="2024-12-13T15:10:23.818244802Z" level=info msg="Start streaming server" Dec 13 15:10:23.822993 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 15:10:23.833583 systemd[1]: Started systemd-logind.service. Dec 13 15:10:23.841384 extend-filesystems[1227]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 15:10:23.841384 extend-filesystems[1227]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 15:10:23.841384 extend-filesystems[1227]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 15:10:23.844051 extend-filesystems[1176]: Resized filesystem in /dev/vda9 Dec 13 15:10:23.844207 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 15:10:23.844411 systemd[1]: Finished extend-filesystems.service. Dec 13 15:10:23.882117 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 15:10:23.882276 systemd[1]: Started systemd-hostnamed.service. Dec 13 15:10:23.889669 dbus-daemon[1172]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1217 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 15:10:23.893101 systemd[1]: Starting polkit.service... 
Dec 13 15:10:23.920760 polkitd[1236]: Started polkitd version 121 Dec 13 15:10:23.941757 polkitd[1236]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 15:10:23.942011 polkitd[1236]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 15:10:23.949429 polkitd[1236]: Finished loading, compiling and executing 2 rules Dec 13 15:10:23.950043 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 15:10:23.950269 systemd[1]: Started polkit.service. Dec 13 15:10:23.951554 polkitd[1236]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 15:10:23.979818 systemd-hostnamed[1217]: Hostname set to (static) Dec 13 15:10:23.987960 systemd-networkd[1030]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d50c:24:19ff:fef3:5432/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d50c:24:19ff:fef3:5432/64 assigned by NDisc. Dec 13 15:10:23.988033 systemd-networkd[1030]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 15:10:24.367156 tar[1194]: linux-amd64/LICENSE Dec 13 15:10:24.367351 tar[1194]: linux-amd64/README.md Dec 13 15:10:24.374940 systemd[1]: Finished prepare-helm.service. Dec 13 15:10:24.537782 locksmithd[1231]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 15:10:24.857017 systemd[1]: Started kubelet.service. Dec 13 15:10:25.430867 sshd_keygen[1198]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 15:10:25.462893 systemd[1]: Finished sshd-keygen.service. Dec 13 15:10:25.468531 systemd[1]: Starting issuegen.service... Dec 13 15:10:25.473833 systemd[1]: Started sshd@0-10.243.84.50:22-139.178.68.195:44254.service. Dec 13 15:10:25.489898 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 15:10:25.490196 systemd[1]: Finished issuegen.service. Dec 13 15:10:25.493310 systemd[1]: Starting systemd-user-sessions.service... 
Dec 13 15:10:25.507509 systemd[1]: Finished systemd-user-sessions.service. Dec 13 15:10:25.510869 systemd[1]: Started getty@tty1.service. Dec 13 15:10:25.515551 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 15:10:25.516802 systemd[1]: Reached target getty.target. Dec 13 15:10:25.690027 kubelet[1252]: E1213 15:10:25.689780 1252 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 15:10:25.692487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 15:10:25.692730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 15:10:25.693217 systemd[1]: kubelet.service: Consumed 1.095s CPU time. Dec 13 15:10:26.388033 sshd[1266]: Accepted publickey for core from 139.178.68.195 port 44254 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:10:26.390734 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:26.408030 systemd-logind[1182]: New session 1 of user core. Dec 13 15:10:26.409238 systemd[1]: Created slice user-500.slice. Dec 13 15:10:26.415422 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 15:10:26.430776 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 15:10:26.434104 systemd[1]: Starting user@500.service... Dec 13 15:10:26.439737 (systemd)[1275]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:26.538583 systemd[1275]: Queued start job for default target default.target. Dec 13 15:10:26.539418 systemd[1275]: Reached target paths.target. Dec 13 15:10:26.539452 systemd[1275]: Reached target sockets.target. Dec 13 15:10:26.539472 systemd[1275]: Reached target timers.target. 
Dec 13 15:10:26.539490 systemd[1275]: Reached target basic.target. Dec 13 15:10:26.539652 systemd[1]: Started user@500.service. Dec 13 15:10:26.541673 systemd[1]: Started session-1.scope. Dec 13 15:10:26.542901 systemd[1275]: Reached target default.target. Dec 13 15:10:26.543127 systemd[1275]: Startup finished in 94ms. Dec 13 15:10:27.167532 systemd[1]: Started sshd@1-10.243.84.50:22-139.178.68.195:56812.service. Dec 13 15:10:28.054022 sshd[1285]: Accepted publickey for core from 139.178.68.195 port 56812 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:10:28.055840 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:28.063665 systemd[1]: Started session-2.scope. Dec 13 15:10:28.067644 systemd-logind[1182]: New session 2 of user core. Dec 13 15:10:28.673782 sshd[1285]: pam_unix(sshd:session): session closed for user core Dec 13 15:10:28.677607 systemd-logind[1182]: Session 2 logged out. Waiting for processes to exit. Dec 13 15:10:28.678599 systemd[1]: sshd@1-10.243.84.50:22-139.178.68.195:56812.service: Deactivated successfully. Dec 13 15:10:28.679623 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 15:10:28.680634 systemd-logind[1182]: Removed session 2. Dec 13 15:10:28.820468 systemd[1]: Started sshd@2-10.243.84.50:22-139.178.68.195:56816.service. Dec 13 15:10:29.701908 sshd[1291]: Accepted publickey for core from 139.178.68.195 port 56816 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:10:29.703816 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:29.710527 systemd-logind[1182]: New session 3 of user core. Dec 13 15:10:29.711251 systemd[1]: Started session-3.scope. Dec 13 15:10:30.318692 sshd[1291]: pam_unix(sshd:session): session closed for user core Dec 13 15:10:30.322118 systemd[1]: sshd@2-10.243.84.50:22-139.178.68.195:56816.service: Deactivated successfully. 
Dec 13 15:10:30.323061 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 15:10:30.323835 systemd-logind[1182]: Session 3 logged out. Waiting for processes to exit. Dec 13 15:10:30.325388 systemd-logind[1182]: Removed session 3. Dec 13 15:10:30.636665 coreos-metadata[1171]: Dec 13 15:10:30.636 WARN failed to locate config-drive, using the metadata service API instead Dec 13 15:10:30.688063 coreos-metadata[1171]: Dec 13 15:10:30.687 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 15:10:30.715919 coreos-metadata[1171]: Dec 13 15:10:30.715 INFO Fetch successful Dec 13 15:10:30.716026 coreos-metadata[1171]: Dec 13 15:10:30.715 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 15:10:30.754388 coreos-metadata[1171]: Dec 13 15:10:30.754 INFO Fetch successful Dec 13 15:10:30.756589 unknown[1171]: wrote ssh authorized keys file for user: core Dec 13 15:10:30.768845 update-ssh-keys[1298]: Updated "/home/core/.ssh/authorized_keys" Dec 13 15:10:30.769839 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 15:10:30.770487 systemd[1]: Reached target multi-user.target. Dec 13 15:10:30.772466 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 15:10:30.782412 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 15:10:30.782616 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 15:10:30.783192 systemd[1]: Startup finished in 1.121s (kernel) + 7.506s (initrd) + 13.641s (userspace) = 22.269s. Dec 13 15:10:35.836470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 15:10:35.836863 systemd[1]: Stopped kubelet.service. Dec 13 15:10:35.836977 systemd[1]: kubelet.service: Consumed 1.095s CPU time. Dec 13 15:10:35.840049 systemd[1]: Starting kubelet.service... Dec 13 15:10:36.004275 systemd[1]: Started kubelet.service. 
Dec 13 15:10:36.104456 kubelet[1304]: E1213 15:10:36.104281 1304 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 15:10:36.109164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 15:10:36.109375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 15:10:40.465079 systemd[1]: Started sshd@3-10.243.84.50:22-139.178.68.195:50032.service. Dec 13 15:10:41.347674 sshd[1311]: Accepted publickey for core from 139.178.68.195 port 50032 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:10:41.349653 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:41.356024 systemd-logind[1182]: New session 4 of user core. Dec 13 15:10:41.356735 systemd[1]: Started session-4.scope. Dec 13 15:10:41.966387 sshd[1311]: pam_unix(sshd:session): session closed for user core Dec 13 15:10:41.969630 systemd[1]: sshd@3-10.243.84.50:22-139.178.68.195:50032.service: Deactivated successfully. Dec 13 15:10:41.970578 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 15:10:41.971426 systemd-logind[1182]: Session 4 logged out. Waiting for processes to exit. Dec 13 15:10:41.972685 systemd-logind[1182]: Removed session 4. Dec 13 15:10:42.113398 systemd[1]: Started sshd@4-10.243.84.50:22-139.178.68.195:50038.service. Dec 13 15:10:42.998483 sshd[1317]: Accepted publickey for core from 139.178.68.195 port 50038 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:10:43.000337 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:43.006433 systemd-logind[1182]: New session 5 of user core. Dec 13 15:10:43.007153 systemd[1]: Started session-5.scope. 
Dec 13 15:10:43.611646 sshd[1317]: pam_unix(sshd:session): session closed for user core Dec 13 15:10:43.615371 systemd[1]: sshd@4-10.243.84.50:22-139.178.68.195:50038.service: Deactivated successfully. Dec 13 15:10:43.616434 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 15:10:43.617309 systemd-logind[1182]: Session 5 logged out. Waiting for processes to exit. Dec 13 15:10:43.618891 systemd-logind[1182]: Removed session 5. Dec 13 15:10:43.759288 systemd[1]: Started sshd@5-10.243.84.50:22-139.178.68.195:50050.service. Dec 13 15:10:44.647916 sshd[1323]: Accepted publickey for core from 139.178.68.195 port 50050 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:10:44.649795 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:44.656501 systemd[1]: Started session-6.scope. Dec 13 15:10:44.657202 systemd-logind[1182]: New session 6 of user core. Dec 13 15:10:45.272314 sshd[1323]: pam_unix(sshd:session): session closed for user core Dec 13 15:10:45.276804 systemd-logind[1182]: Session 6 logged out. Waiting for processes to exit. Dec 13 15:10:45.277291 systemd[1]: sshd@5-10.243.84.50:22-139.178.68.195:50050.service: Deactivated successfully. Dec 13 15:10:45.278257 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 15:10:45.279499 systemd-logind[1182]: Removed session 6. Dec 13 15:10:45.418247 systemd[1]: Started sshd@6-10.243.84.50:22-139.178.68.195:50062.service. Dec 13 15:10:46.149947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 15:10:46.150280 systemd[1]: Stopped kubelet.service. Dec 13 15:10:46.153032 systemd[1]: Starting kubelet.service... Dec 13 15:10:46.300210 systemd[1]: Started kubelet.service. 
Dec 13 15:10:46.305659 sshd[1329]: Accepted publickey for core from 139.178.68.195 port 50062 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:10:46.306724 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:10:46.314963 systemd[1]: Started session-7.scope. Dec 13 15:10:46.315534 systemd-logind[1182]: New session 7 of user core. Dec 13 15:10:46.394490 kubelet[1335]: E1213 15:10:46.394401 1335 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 15:10:46.397995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 15:10:46.398212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 15:10:46.796143 sudo[1342]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 15:10:46.797318 sudo[1342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 15:10:46.840691 systemd[1]: Starting docker.service... 
Dec 13 15:10:46.900869 env[1352]: time="2024-12-13T15:10:46.900732181Z" level=info msg="Starting up" Dec 13 15:10:46.905004 env[1352]: time="2024-12-13T15:10:46.904933893Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 15:10:46.905004 env[1352]: time="2024-12-13T15:10:46.904986623Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 15:10:46.905187 env[1352]: time="2024-12-13T15:10:46.905039808Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 15:10:46.905187 env[1352]: time="2024-12-13T15:10:46.905076172Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 15:10:46.909911 env[1352]: time="2024-12-13T15:10:46.909867369Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 15:10:46.909911 env[1352]: time="2024-12-13T15:10:46.909895244Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 15:10:46.909911 env[1352]: time="2024-12-13T15:10:46.909913120Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 15:10:46.910195 env[1352]: time="2024-12-13T15:10:46.909928251Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 15:10:46.925891 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2347564738-merged.mount: Deactivated successfully. Dec 13 15:10:46.950857 env[1352]: time="2024-12-13T15:10:46.950739702Z" level=info msg="Loading containers: start." Dec 13 15:10:47.133016 kernel: Initializing XFRM netlink socket Dec 13 15:10:47.178923 env[1352]: time="2024-12-13T15:10:47.178823810Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 13 15:10:47.284835 systemd-networkd[1030]: docker0: Link UP Dec 13 15:10:47.305343 env[1352]: time="2024-12-13T15:10:47.305280385Z" level=info msg="Loading containers: done." Dec 13 15:10:47.331047 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1791906907-merged.mount: Deactivated successfully. Dec 13 15:10:47.347395 env[1352]: time="2024-12-13T15:10:47.347297760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 15:10:47.347910 env[1352]: time="2024-12-13T15:10:47.347872086Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 15:10:47.348220 env[1352]: time="2024-12-13T15:10:47.348186400Z" level=info msg="Daemon has completed initialization" Dec 13 15:10:47.367122 systemd[1]: Started docker.service. Dec 13 15:10:47.378805 env[1352]: time="2024-12-13T15:10:47.378498723Z" level=info msg="API listen on /run/docker.sock" Dec 13 15:10:48.894521 env[1196]: time="2024-12-13T15:10:48.894388896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 15:10:49.780843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932777612.mount: Deactivated successfully. 
Dec 13 15:10:52.896760 env[1196]: time="2024-12-13T15:10:52.896528409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:52.900756 env[1196]: time="2024-12-13T15:10:52.900720623Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:52.903050 env[1196]: time="2024-12-13T15:10:52.903017856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:52.904223 env[1196]: time="2024-12-13T15:10:52.904172249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 15:10:52.908476 env[1196]: time="2024-12-13T15:10:52.905399131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:52.923954 env[1196]: time="2024-12-13T15:10:52.923899193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 15:10:54.018181 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 15:10:56.586543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 15:10:56.587023 systemd[1]: Stopped kubelet.service.
Dec 13 15:10:56.590370 systemd[1]: Starting kubelet.service...
Dec 13 15:10:56.911949 env[1196]: time="2024-12-13T15:10:56.911784523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:56.914149 env[1196]: time="2024-12-13T15:10:56.914117865Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:56.963751 systemd[1]: Started kubelet.service.
Dec 13 15:10:56.992567 env[1196]: time="2024-12-13T15:10:56.992423654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:56.995487 env[1196]: time="2024-12-13T15:10:56.995399828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:56.997005 env[1196]: time="2024-12-13T15:10:56.996917590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 15:10:57.011876 env[1196]: time="2024-12-13T15:10:57.011516397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 15:10:57.069421 kubelet[1498]: E1213 15:10:57.069338 1498 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:10:57.071999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:10:57.072234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:10:59.205740 env[1196]: time="2024-12-13T15:10:59.205573628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:59.208529 env[1196]: time="2024-12-13T15:10:59.208490173Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:59.211263 env[1196]: time="2024-12-13T15:10:59.211219146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:59.214055 env[1196]: time="2024-12-13T15:10:59.214017164Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:10:59.215720 env[1196]: time="2024-12-13T15:10:59.215655813Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 15:10:59.233285 env[1196]: time="2024-12-13T15:10:59.233241113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 15:11:00.788570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1077730707.mount: Deactivated successfully.
Dec 13 15:11:01.682112 env[1196]: time="2024-12-13T15:11:01.682035850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:01.684375 env[1196]: time="2024-12-13T15:11:01.684340876Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:01.686190 env[1196]: time="2024-12-13T15:11:01.686157840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:01.688783 env[1196]: time="2024-12-13T15:11:01.688750695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:01.689735 env[1196]: time="2024-12-13T15:11:01.689700080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 15:11:01.704109 env[1196]: time="2024-12-13T15:11:01.704023881Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 15:11:02.342443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964489839.mount: Deactivated successfully.
Dec 13 15:11:03.810776 env[1196]: time="2024-12-13T15:11:03.810684875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:03.813541 env[1196]: time="2024-12-13T15:11:03.813458745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:03.816887 env[1196]: time="2024-12-13T15:11:03.816844119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:03.820312 env[1196]: time="2024-12-13T15:11:03.820276837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:03.821372 env[1196]: time="2024-12-13T15:11:03.821325102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 15:11:03.836837 env[1196]: time="2024-12-13T15:11:03.836772336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 15:11:04.418287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134597576.mount: Deactivated successfully.
Dec 13 15:11:04.432915 env[1196]: time="2024-12-13T15:11:04.432870394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:04.434430 env[1196]: time="2024-12-13T15:11:04.434397801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:04.441490 env[1196]: time="2024-12-13T15:11:04.441455015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:04.444003 env[1196]: time="2024-12-13T15:11:04.443941198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:04.445033 env[1196]: time="2024-12-13T15:11:04.444962459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 15:11:04.459154 env[1196]: time="2024-12-13T15:11:04.459097885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 15:11:05.103774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892901681.mount: Deactivated successfully.
Dec 13 15:11:07.088741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 15:11:07.089323 systemd[1]: Stopped kubelet.service.
Dec 13 15:11:07.093914 systemd[1]: Starting kubelet.service...
Dec 13 15:11:07.711566 systemd[1]: Started kubelet.service.
Dec 13 15:11:07.830842 kubelet[1535]: E1213 15:11:07.830733 1535 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:11:07.833623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:11:07.833860 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:11:08.528383 update_engine[1183]: I1213 15:11:08.528263 1183 update_attempter.cc:509] Updating boot flags...
Dec 13 15:11:09.508051 env[1196]: time="2024-12-13T15:11:09.507886561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:09.511014 env[1196]: time="2024-12-13T15:11:09.510955305Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:09.513592 env[1196]: time="2024-12-13T15:11:09.513552286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:09.515954 env[1196]: time="2024-12-13T15:11:09.515919863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:09.517297 env[1196]: time="2024-12-13T15:11:09.517231528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 15:11:13.267324 systemd[1]: Stopped kubelet.service.
Dec 13 15:11:13.274899 systemd[1]: Starting kubelet.service...
Dec 13 15:11:13.308287 systemd[1]: Reloading.
Dec 13 15:11:13.453761 /usr/lib/systemd/system-generators/torcx-generator[1640]: time="2024-12-13T15:11:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 15:11:13.453816 /usr/lib/systemd/system-generators/torcx-generator[1640]: time="2024-12-13T15:11:13Z" level=info msg="torcx already run"
Dec 13 15:11:13.545897 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 15:11:13.546868 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 15:11:13.576214 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 15:11:13.708418 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 15:11:13.708829 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 15:11:13.709501 systemd[1]: Stopped kubelet.service.
Dec 13 15:11:13.713604 systemd[1]: Starting kubelet.service...
Dec 13 15:11:13.978798 systemd[1]: Started kubelet.service.
Dec 13 15:11:14.069934 kubelet[1690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 15:11:14.069934 kubelet[1690]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 15:11:14.069934 kubelet[1690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 15:11:14.070660 kubelet[1690]: I1213 15:11:14.070033 1690 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 15:11:14.969681 kubelet[1690]: I1213 15:11:14.969577 1690 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 15:11:14.970018 kubelet[1690]: I1213 15:11:14.969994 1690 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 15:11:14.970460 kubelet[1690]: I1213 15:11:14.970436 1690 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 15:11:15.007602 kubelet[1690]: E1213 15:11:15.007031 1690 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.243.84.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.010227 kubelet[1690]: I1213 15:11:15.010190 1690 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 15:11:15.027247 kubelet[1690]: I1213 15:11:15.027208 1690 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 15:11:15.029403 kubelet[1690]: I1213 15:11:15.029379 1690 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 15:11:15.029817 kubelet[1690]: I1213 15:11:15.029788 1690 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 15:11:15.030754 kubelet[1690]: I1213 15:11:15.030727 1690 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 15:11:15.030909 kubelet[1690]: I1213 15:11:15.030887 1690 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 15:11:15.031269 kubelet[1690]: I1213 15:11:15.031246 1690 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 15:11:15.031572 kubelet[1690]: I1213 15:11:15.031549 1690 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 15:11:15.031719 kubelet[1690]: I1213 15:11:15.031696 1690 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 15:11:15.031884 kubelet[1690]: I1213 15:11:15.031861 1690 kubelet.go:312] "Adding apiserver pod source"
Dec 13 15:11:15.032085 kubelet[1690]: I1213 15:11:15.032064 1690 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 15:11:15.034277 kubelet[1690]: W1213 15:11:15.034203 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.243.84.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.034477 kubelet[1690]: E1213 15:11:15.034454 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.243.84.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.034704 kubelet[1690]: W1213 15:11:15.034648 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.243.84.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iw0hd.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.034835 kubelet[1690]: E1213 15:11:15.034811 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.243.84.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iw0hd.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.035107 kubelet[1690]: I1213 15:11:15.035070 1690 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 15:11:15.039807 kubelet[1690]: I1213 15:11:15.039780 1690 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 15:11:15.040058 kubelet[1690]: W1213 15:11:15.040036 1690 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 15:11:15.041339 kubelet[1690]: I1213 15:11:15.041316 1690 server.go:1256] "Started kubelet"
Dec 13 15:11:15.046726 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 15:11:15.047078 kubelet[1690]: I1213 15:11:15.047055 1690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 15:11:15.047398 kubelet[1690]: E1213 15:11:15.047369 1690 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 15:11:15.050144 kubelet[1690]: I1213 15:11:15.050119 1690 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 15:11:15.051302 kubelet[1690]: I1213 15:11:15.051276 1690 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 15:11:15.052742 kubelet[1690]: I1213 15:11:15.052713 1690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 15:11:15.053089 kubelet[1690]: I1213 15:11:15.053053 1690 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 15:11:15.059496 kubelet[1690]: I1213 15:11:15.059471 1690 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 15:11:15.062690 kubelet[1690]: E1213 15:11:15.062609 1690 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.84.50:6443/api/v1/namespaces/default/events\": dial tcp 10.243.84.50:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-iw0hd.gb1.brightbox.com.1810c531fde3eac2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-iw0hd.gb1.brightbox.com,UID:srv-iw0hd.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-iw0hd.gb1.brightbox.com,},FirstTimestamp:2024-12-13 15:11:15.041282754 +0000 UTC m=+1.055211604,LastTimestamp:2024-12-13 15:11:15.041282754 +0000 UTC m=+1.055211604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-iw0hd.gb1.brightbox.com,}"
Dec 13 15:11:15.063283 kubelet[1690]: E1213 15:11:15.063241 1690 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iw0hd.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.50:6443: connect: connection refused" interval="200ms"
Dec 13 15:11:15.063648 kubelet[1690]: I1213 15:11:15.063620 1690 factory.go:221] Registration of the systemd container factory successfully
Dec 13 15:11:15.063769 kubelet[1690]: I1213 15:11:15.063740 1690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 15:11:15.065383 kubelet[1690]: I1213 15:11:15.065351 1690 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 15:11:15.065472 kubelet[1690]: I1213 15:11:15.065420 1690 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 15:11:15.065861 kubelet[1690]: W1213 15:11:15.065817 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.243.84.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.065958 kubelet[1690]: E1213 15:11:15.065869 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.243.84.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.066113 kubelet[1690]: I1213 15:11:15.066084 1690 factory.go:221] Registration of the containerd container factory successfully
Dec 13 15:11:15.080355 kubelet[1690]: I1213 15:11:15.080314 1690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 15:11:15.082322 kubelet[1690]: I1213 15:11:15.082299 1690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 15:11:15.082493 kubelet[1690]: I1213 15:11:15.082469 1690 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 15:11:15.082671 kubelet[1690]: I1213 15:11:15.082636 1690 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 15:11:15.082872 kubelet[1690]: E1213 15:11:15.082849 1690 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 15:11:15.092563 kubelet[1690]: W1213 15:11:15.092517 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.243.84.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.092745 kubelet[1690]: E1213 15:11:15.092723 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.243.84.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.114557 kubelet[1690]: I1213 15:11:15.114503 1690 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 15:11:15.114818 kubelet[1690]: I1213 15:11:15.114796 1690 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 15:11:15.115044 kubelet[1690]: I1213 15:11:15.115023 1690 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 15:11:15.125257 kubelet[1690]: I1213 15:11:15.125231 1690 policy_none.go:49] "None policy: Start"
Dec 13 15:11:15.126313 kubelet[1690]: I1213 15:11:15.126290 1690 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 15:11:15.126621 kubelet[1690]: I1213 15:11:15.126588 1690 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 15:11:15.134132 systemd[1]: Created slice kubepods.slice.
Dec 13 15:11:15.141948 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 15:11:15.147111 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 15:11:15.153448 kubelet[1690]: I1213 15:11:15.153375 1690 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 15:11:15.153791 kubelet[1690]: I1213 15:11:15.153760 1690 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 15:11:15.158157 kubelet[1690]: E1213 15:11:15.157983 1690 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-iw0hd.gb1.brightbox.com\" not found"
Dec 13 15:11:15.163538 kubelet[1690]: I1213 15:11:15.163514 1690 kubelet_node_status.go:73] "Attempting to register node" node="srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.164316 kubelet[1690]: E1213 15:11:15.164288 1690 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.84.50:6443/api/v1/nodes\": dial tcp 10.243.84.50:6443: connect: connection refused" node="srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.183723 kubelet[1690]: I1213 15:11:15.183672 1690 topology_manager.go:215] "Topology Admit Handler" podUID="53c60e6868ba5a756e6cfe8802f1a7ee" podNamespace="kube-system" podName="kube-controller-manager-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.186829 kubelet[1690]: I1213 15:11:15.186803 1690 topology_manager.go:215] "Topology Admit Handler" podUID="0288134d9499acf4813b9083829a4e97" podNamespace="kube-system" podName="kube-scheduler-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.189255 kubelet[1690]: I1213 15:11:15.189230 1690 topology_manager.go:215] "Topology Admit Handler" podUID="ed4e02d90cc7cb2a2af075bc9a58762f" podNamespace="kube-system" podName="kube-apiserver-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.198606 systemd[1]: Created slice kubepods-burstable-pod53c60e6868ba5a756e6cfe8802f1a7ee.slice.
Dec 13 15:11:15.207532 systemd[1]: Created slice kubepods-burstable-poded4e02d90cc7cb2a2af075bc9a58762f.slice.
Dec 13 15:11:15.223939 systemd[1]: Created slice kubepods-burstable-pod0288134d9499acf4813b9083829a4e97.slice.
Dec 13 15:11:15.264064 kubelet[1690]: E1213 15:11:15.264011 1690 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iw0hd.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.50:6443: connect: connection refused" interval="400ms"
Dec 13 15:11:15.266748 kubelet[1690]: I1213 15:11:15.266620 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-ca-certs\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.266748 kubelet[1690]: I1213 15:11:15.266723 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-kubeconfig\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.266938 kubelet[1690]: I1213 15:11:15.266774 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0288134d9499acf4813b9083829a4e97-kubeconfig\") pod \"kube-scheduler-srv-iw0hd.gb1.brightbox.com\" (UID: \"0288134d9499acf4813b9083829a4e97\") " pod="kube-system/kube-scheduler-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.266938 kubelet[1690]: I1213 15:11:15.266806 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed4e02d90cc7cb2a2af075bc9a58762f-ca-certs\") pod \"kube-apiserver-srv-iw0hd.gb1.brightbox.com\" (UID: \"ed4e02d90cc7cb2a2af075bc9a58762f\") " pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.266938 kubelet[1690]: I1213 15:11:15.266851 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed4e02d90cc7cb2a2af075bc9a58762f-k8s-certs\") pod \"kube-apiserver-srv-iw0hd.gb1.brightbox.com\" (UID: \"ed4e02d90cc7cb2a2af075bc9a58762f\") " pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.266938 kubelet[1690]: I1213 15:11:15.266883 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-k8s-certs\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.266938 kubelet[1690]: I1213 15:11:15.266931 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.267289 kubelet[1690]: I1213 15:11:15.267005 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed4e02d90cc7cb2a2af075bc9a58762f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-iw0hd.gb1.brightbox.com\" (UID: \"ed4e02d90cc7cb2a2af075bc9a58762f\") " pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.267289 kubelet[1690]: I1213 15:11:15.267049 1690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-flexvolume-dir\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.368826 kubelet[1690]: I1213 15:11:15.368781 1690 kubelet_node_status.go:73] "Attempting to register node" node="srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.369294 kubelet[1690]: E1213 15:11:15.369260 1690 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.84.50:6443/api/v1/nodes\": dial tcp 10.243.84.50:6443: connect: connection refused" node="srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.523499 env[1196]: time="2024-12-13T15:11:15.522842997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-iw0hd.gb1.brightbox.com,Uid:ed4e02d90cc7cb2a2af075bc9a58762f,Namespace:kube-system,Attempt:0,}"
Dec 13 15:11:15.524172 env[1196]: time="2024-12-13T15:11:15.522786415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-iw0hd.gb1.brightbox.com,Uid:53c60e6868ba5a756e6cfe8802f1a7ee,Namespace:kube-system,Attempt:0,}"
Dec 13 15:11:15.529274 env[1196]: time="2024-12-13T15:11:15.528955261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-iw0hd.gb1.brightbox.com,Uid:0288134d9499acf4813b9083829a4e97,Namespace:kube-system,Attempt:0,}"
Dec 13 15:11:15.665615 kubelet[1690]: E1213 15:11:15.665555 1690 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iw0hd.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.50:6443: connect: connection refused" interval="800ms"
Dec 13 15:11:15.773076 kubelet[1690]: I1213 15:11:15.773038 1690 kubelet_node_status.go:73] "Attempting to register node" node="srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.773446 kubelet[1690]: E1213 15:11:15.773421 1690 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.84.50:6443/api/v1/nodes\": dial tcp 10.243.84.50:6443: connect: connection refused" node="srv-iw0hd.gb1.brightbox.com"
Dec 13 15:11:15.918098 kubelet[1690]: W1213 15:11:15.917952 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.243.84.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.918486 kubelet[1690]: E1213 15:11:15.918452 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.243.84.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused
Dec 13 15:11:15.939762 kubelet[1690]: W1213 15:11:15.939630 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed
to list *v1.Node: Get "https://10.243.84.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iw0hd.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused Dec 13 15:11:15.939762 kubelet[1690]: E1213 15:11:15.939722 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.243.84.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iw0hd.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused Dec 13 15:11:16.007334 kubelet[1690]: W1213 15:11:16.007197 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.243.84.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused Dec 13 15:11:16.007334 kubelet[1690]: E1213 15:11:16.007331 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.243.84.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused Dec 13 15:11:16.123105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528402295.mount: Deactivated successfully. 
Dec 13 15:11:16.131716 env[1196]: time="2024-12-13T15:11:16.131628718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.157233 env[1196]: time="2024-12-13T15:11:16.157151862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.160333 env[1196]: time="2024-12-13T15:11:16.160288168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.162533 env[1196]: time="2024-12-13T15:11:16.162492343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.165418 env[1196]: time="2024-12-13T15:11:16.165381665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.167350 env[1196]: time="2024-12-13T15:11:16.167317391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.168936 env[1196]: time="2024-12-13T15:11:16.168371028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.172752 env[1196]: time="2024-12-13T15:11:16.172718923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.177304 env[1196]: time="2024-12-13T15:11:16.177241232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.178634 env[1196]: time="2024-12-13T15:11:16.178593279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.179557 env[1196]: time="2024-12-13T15:11:16.179519465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.180464 env[1196]: time="2024-12-13T15:11:16.180426841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:16.228847 env[1196]: time="2024-12-13T15:11:16.228741560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:11:16.229406 env[1196]: time="2024-12-13T15:11:16.229316161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:11:16.229406 env[1196]: time="2024-12-13T15:11:16.229352155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:11:16.229866 env[1196]: time="2024-12-13T15:11:16.229803569Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a9af76d6f4eee5cdc9f3611d7d61aa073a3f2dab48ab279d786ed2e2369c713 pid=1742 runtime=io.containerd.runc.v2 Dec 13 15:11:16.232135 env[1196]: time="2024-12-13T15:11:16.232038416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:11:16.232258 env[1196]: time="2024-12-13T15:11:16.232165781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:11:16.232345 env[1196]: time="2024-12-13T15:11:16.232240997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:11:16.232566 env[1196]: time="2024-12-13T15:11:16.232505933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c28ea523b39473473db9a73196fea356861f7be548a2d2d9826bd353990b61d6 pid=1736 runtime=io.containerd.runc.v2 Dec 13 15:11:16.246497 env[1196]: time="2024-12-13T15:11:16.246207364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:11:16.246497 env[1196]: time="2024-12-13T15:11:16.246276740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:11:16.246497 env[1196]: time="2024-12-13T15:11:16.246293272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:11:16.246898 env[1196]: time="2024-12-13T15:11:16.246803858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/933e5aa77b2967b48a46b7cf4e873506fd6ba80857f20c072b20fa9cb980589a pid=1753 runtime=io.containerd.runc.v2 Dec 13 15:11:16.276285 systemd[1]: Started cri-containerd-2a9af76d6f4eee5cdc9f3611d7d61aa073a3f2dab48ab279d786ed2e2369c713.scope. Dec 13 15:11:16.297811 systemd[1]: Started cri-containerd-933e5aa77b2967b48a46b7cf4e873506fd6ba80857f20c072b20fa9cb980589a.scope. Dec 13 15:11:16.322553 systemd[1]: Started cri-containerd-c28ea523b39473473db9a73196fea356861f7be548a2d2d9826bd353990b61d6.scope. Dec 13 15:11:16.412821 env[1196]: time="2024-12-13T15:11:16.412766130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-iw0hd.gb1.brightbox.com,Uid:ed4e02d90cc7cb2a2af075bc9a58762f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a9af76d6f4eee5cdc9f3611d7d61aa073a3f2dab48ab279d786ed2e2369c713\"" Dec 13 15:11:16.422881 env[1196]: time="2024-12-13T15:11:16.422735176Z" level=info msg="CreateContainer within sandbox \"2a9af76d6f4eee5cdc9f3611d7d61aa073a3f2dab48ab279d786ed2e2369c713\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 15:11:16.440855 kubelet[1690]: W1213 15:11:16.440729 1690 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.243.84.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused Dec 13 15:11:16.441425 kubelet[1690]: E1213 15:11:16.440887 1690 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.243.84.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.84.50:6443: connect: connection refused Dec 13 15:11:16.447597 env[1196]: 
time="2024-12-13T15:11:16.447521979Z" level=info msg="CreateContainer within sandbox \"2a9af76d6f4eee5cdc9f3611d7d61aa073a3f2dab48ab279d786ed2e2369c713\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c429420b59104bca0cf05b6c8f5830e9afbfca2e30edc2e3a7e803edcfe2e0e\"" Dec 13 15:11:16.448895 env[1196]: time="2024-12-13T15:11:16.448851670Z" level=info msg="StartContainer for \"2c429420b59104bca0cf05b6c8f5830e9afbfca2e30edc2e3a7e803edcfe2e0e\"" Dec 13 15:11:16.463390 env[1196]: time="2024-12-13T15:11:16.463328367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-iw0hd.gb1.brightbox.com,Uid:53c60e6868ba5a756e6cfe8802f1a7ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"c28ea523b39473473db9a73196fea356861f7be548a2d2d9826bd353990b61d6\"" Dec 13 15:11:16.466161 kubelet[1690]: E1213 15:11:16.466106 1690 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.84.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iw0hd.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.84.50:6443: connect: connection refused" interval="1.6s" Dec 13 15:11:16.471445 env[1196]: time="2024-12-13T15:11:16.471368653Z" level=info msg="CreateContainer within sandbox \"c28ea523b39473473db9a73196fea356861f7be548a2d2d9826bd353990b61d6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 15:11:16.494228 systemd[1]: Started cri-containerd-2c429420b59104bca0cf05b6c8f5830e9afbfca2e30edc2e3a7e803edcfe2e0e.scope. 
Dec 13 15:11:16.509301 env[1196]: time="2024-12-13T15:11:16.509222241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-iw0hd.gb1.brightbox.com,Uid:0288134d9499acf4813b9083829a4e97,Namespace:kube-system,Attempt:0,} returns sandbox id \"933e5aa77b2967b48a46b7cf4e873506fd6ba80857f20c072b20fa9cb980589a\"" Dec 13 15:11:16.514343 env[1196]: time="2024-12-13T15:11:16.514283315Z" level=info msg="CreateContainer within sandbox \"c28ea523b39473473db9a73196fea356861f7be548a2d2d9826bd353990b61d6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f568e64ba639eebe96fd39222ab28e05b0afe31d86a0a593f0dc1b4b322bb2df\"" Dec 13 15:11:16.514863 env[1196]: time="2024-12-13T15:11:16.514828920Z" level=info msg="StartContainer for \"f568e64ba639eebe96fd39222ab28e05b0afe31d86a0a593f0dc1b4b322bb2df\"" Dec 13 15:11:16.525200 env[1196]: time="2024-12-13T15:11:16.525139535Z" level=info msg="CreateContainer within sandbox \"933e5aa77b2967b48a46b7cf4e873506fd6ba80857f20c072b20fa9cb980589a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 15:11:16.561671 systemd[1]: Started cri-containerd-f568e64ba639eebe96fd39222ab28e05b0afe31d86a0a593f0dc1b4b322bb2df.scope. 
Dec 13 15:11:16.574188 env[1196]: time="2024-12-13T15:11:16.574123268Z" level=info msg="CreateContainer within sandbox \"933e5aa77b2967b48a46b7cf4e873506fd6ba80857f20c072b20fa9cb980589a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c9f2691ff7be0faba67099f49534a07bf413f3c9abc8d43b3034426f0f22c08\"" Dec 13 15:11:16.575917 env[1196]: time="2024-12-13T15:11:16.575877070Z" level=info msg="StartContainer for \"7c9f2691ff7be0faba67099f49534a07bf413f3c9abc8d43b3034426f0f22c08\"" Dec 13 15:11:16.577922 kubelet[1690]: I1213 15:11:16.577886 1690 kubelet_node_status.go:73] "Attempting to register node" node="srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:16.580750 kubelet[1690]: E1213 15:11:16.578600 1690 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.84.50:6443/api/v1/nodes\": dial tcp 10.243.84.50:6443: connect: connection refused" node="srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:16.602743 env[1196]: time="2024-12-13T15:11:16.602684387Z" level=info msg="StartContainer for \"2c429420b59104bca0cf05b6c8f5830e9afbfca2e30edc2e3a7e803edcfe2e0e\" returns successfully" Dec 13 15:11:16.630652 systemd[1]: Started cri-containerd-7c9f2691ff7be0faba67099f49534a07bf413f3c9abc8d43b3034426f0f22c08.scope. 
Dec 13 15:11:16.683745 env[1196]: time="2024-12-13T15:11:16.683561586Z" level=info msg="StartContainer for \"f568e64ba639eebe96fd39222ab28e05b0afe31d86a0a593f0dc1b4b322bb2df\" returns successfully" Dec 13 15:11:16.738195 env[1196]: time="2024-12-13T15:11:16.738133110Z" level=info msg="StartContainer for \"7c9f2691ff7be0faba67099f49534a07bf413f3c9abc8d43b3034426f0f22c08\" returns successfully" Dec 13 15:11:17.171414 kubelet[1690]: E1213 15:11:17.171359 1690 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.243.84.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.243.84.50:6443: connect: connection refused Dec 13 15:11:18.182289 kubelet[1690]: I1213 15:11:18.182233 1690 kubelet_node_status.go:73] "Attempting to register node" node="srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:19.645527 kubelet[1690]: I1213 15:11:19.645473 1690 kubelet_node_status.go:76] "Successfully registered node" node="srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:20.037402 kubelet[1690]: I1213 15:11:20.036850 1690 apiserver.go:52] "Watching apiserver" Dec 13 15:11:20.065735 kubelet[1690]: I1213 15:11:20.065684 1690 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 15:11:22.690881 systemd[1]: Reloading. 
Dec 13 15:11:22.848914 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-12-13T15:11:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 15:11:22.849018 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-12-13T15:11:22Z" level=info msg="torcx already run" Dec 13 15:11:22.956068 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 15:11:22.956575 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 15:11:22.985726 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 15:11:23.108796 kubelet[1690]: W1213 15:11:23.108727 1690 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:11:23.178850 kubelet[1690]: I1213 15:11:23.178768 1690 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 15:11:23.181622 systemd[1]: Stopping kubelet.service... Dec 13 15:11:23.200045 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 15:11:23.200434 systemd[1]: Stopped kubelet.service. Dec 13 15:11:23.200959 systemd[1]: kubelet.service: Consumed 1.635s CPU time. Dec 13 15:11:23.205908 systemd[1]: Starting kubelet.service... Dec 13 15:11:24.315068 systemd[1]: Started kubelet.service. 
Dec 13 15:11:24.456079 sudo[2044]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 15:11:24.457204 kubelet[2033]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 15:11:24.457204 kubelet[2033]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 15:11:24.457204 kubelet[2033]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 15:11:24.457204 kubelet[2033]: I1213 15:11:24.457063 2033 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 15:11:24.457861 sudo[2044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 15:11:24.465601 kubelet[2033]: I1213 15:11:24.465540 2033 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 15:11:24.465601 kubelet[2033]: I1213 15:11:24.465570 2033 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 15:11:24.465861 kubelet[2033]: I1213 15:11:24.465833 2033 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 15:11:24.468062 kubelet[2033]: I1213 15:11:24.468029 2033 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 15:11:24.471046 kubelet[2033]: I1213 15:11:24.471017 2033 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 15:11:24.501658 kubelet[2033]: I1213 15:11:24.501043 2033 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 15:11:24.501658 kubelet[2033]: I1213 15:11:24.501463 2033 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 15:11:24.501896 kubelet[2033]: I1213 15:11:24.501735 2033 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Dec 13 15:11:24.501896 kubelet[2033]: I1213 15:11:24.501798 2033 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 15:11:24.501896 kubelet[2033]: I1213 15:11:24.501816 2033 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 15:11:24.501896 kubelet[2033]: I1213 15:11:24.501892 2033 state_mem.go:36] "Initialized new in-memory state store" Dec 13 15:11:24.503042 kubelet[2033]: I1213 15:11:24.502366 2033 kubelet.go:396] "Attempting to sync node with API server" Dec 13 15:11:24.503042 kubelet[2033]: I1213 15:11:24.502412 2033 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 15:11:24.503042 kubelet[2033]: I1213 15:11:24.502504 2033 kubelet.go:312] "Adding apiserver pod source" Dec 13 15:11:24.503042 kubelet[2033]: I1213 15:11:24.502534 2033 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 15:11:24.508151 kubelet[2033]: I1213 15:11:24.508125 2033 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 15:11:24.508610 kubelet[2033]: I1213 15:11:24.508585 2033 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 15:11:24.509449 kubelet[2033]: I1213 15:11:24.509426 2033 server.go:1256] "Started kubelet" Dec 13 15:11:24.520788 kubelet[2033]: I1213 15:11:24.513375 2033 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 15:11:24.520788 kubelet[2033]: I1213 15:11:24.513951 2033 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 15:11:24.520788 kubelet[2033]: I1213 15:11:24.514031 2033 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 15:11:24.520788 kubelet[2033]: I1213 15:11:24.515355 2033 server.go:461] "Adding debug handlers to kubelet server" Dec 13 15:11:24.522746 kubelet[2033]: I1213 
15:11:24.522719 2033 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 15:11:24.544589 kubelet[2033]: I1213 15:11:24.544561 2033 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 15:11:24.545311 kubelet[2033]: I1213 15:11:24.545288 2033 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 15:11:24.546682 kubelet[2033]: I1213 15:11:24.546658 2033 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 15:11:24.565514 kubelet[2033]: I1213 15:11:24.565469 2033 factory.go:221] Registration of the systemd container factory successfully Dec 13 15:11:24.565863 kubelet[2033]: I1213 15:11:24.565826 2033 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 15:11:24.573382 kubelet[2033]: E1213 15:11:24.573278 2033 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 15:11:24.573595 kubelet[2033]: I1213 15:11:24.573562 2033 factory.go:221] Registration of the containerd container factory successfully Dec 13 15:11:24.586635 kubelet[2033]: I1213 15:11:24.586599 2033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 15:11:24.589008 kubelet[2033]: I1213 15:11:24.588220 2033 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 15:11:24.589008 kubelet[2033]: I1213 15:11:24.588356 2033 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 15:11:24.589008 kubelet[2033]: I1213 15:11:24.588393 2033 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 15:11:24.589008 kubelet[2033]: E1213 15:11:24.588512 2033 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 15:11:24.661741 kubelet[2033]: I1213 15:11:24.661706 2033 kubelet_node_status.go:73] "Attempting to register node" node="srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.673729 kubelet[2033]: I1213 15:11:24.673699 2033 kubelet_node_status.go:112] "Node was previously registered" node="srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.674002 kubelet[2033]: I1213 15:11:24.673948 2033 kubelet_node_status.go:76] "Successfully registered node" node="srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.689403 kubelet[2033]: E1213 15:11:24.689025 2033 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 15:11:24.702709 kubelet[2033]: I1213 15:11:24.702649 2033 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 15:11:24.702709 kubelet[2033]: I1213 15:11:24.702693 2033 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 15:11:24.702964 kubelet[2033]: I1213 15:11:24.702729 2033 state_mem.go:36] "Initialized new in-memory state store" Dec 13 15:11:24.703683 kubelet[2033]: I1213 15:11:24.703100 2033 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 15:11:24.703683 kubelet[2033]: I1213 15:11:24.703172 2033 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 15:11:24.703683 kubelet[2033]: I1213 15:11:24.703203 2033 policy_none.go:49] "None policy: Start" Dec 13 15:11:24.704488 kubelet[2033]: I1213 15:11:24.704423 2033 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 
15:11:24.704596 kubelet[2033]: I1213 15:11:24.704503 2033 state_mem.go:35] "Initializing new in-memory state store" Dec 13 15:11:24.704795 kubelet[2033]: I1213 15:11:24.704712 2033 state_mem.go:75] "Updated machine memory state" Dec 13 15:11:24.717576 kubelet[2033]: I1213 15:11:24.717543 2033 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 15:11:24.720793 kubelet[2033]: I1213 15:11:24.720760 2033 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 15:11:24.890059 kubelet[2033]: I1213 15:11:24.890011 2033 topology_manager.go:215] "Topology Admit Handler" podUID="ed4e02d90cc7cb2a2af075bc9a58762f" podNamespace="kube-system" podName="kube-apiserver-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.891262 kubelet[2033]: I1213 15:11:24.891238 2033 topology_manager.go:215] "Topology Admit Handler" podUID="53c60e6868ba5a756e6cfe8802f1a7ee" podNamespace="kube-system" podName="kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.891516 kubelet[2033]: I1213 15:11:24.891492 2033 topology_manager.go:215] "Topology Admit Handler" podUID="0288134d9499acf4813b9083829a4e97" podNamespace="kube-system" podName="kube-scheduler-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.904525 kubelet[2033]: W1213 15:11:24.904488 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:11:24.904846 kubelet[2033]: W1213 15:11:24.904502 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:11:24.906348 kubelet[2033]: W1213 15:11:24.904540 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:11:24.906448 kubelet[2033]: E1213 15:11:24.906407 
2033 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.950759 kubelet[2033]: I1213 15:11:24.950665 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed4e02d90cc7cb2a2af075bc9a58762f-k8s-certs\") pod \"kube-apiserver-srv-iw0hd.gb1.brightbox.com\" (UID: \"ed4e02d90cc7cb2a2af075bc9a58762f\") " pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.951213 kubelet[2033]: I1213 15:11:24.951182 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed4e02d90cc7cb2a2af075bc9a58762f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-iw0hd.gb1.brightbox.com\" (UID: \"ed4e02d90cc7cb2a2af075bc9a58762f\") " pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.952605 kubelet[2033]: I1213 15:11:24.952582 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-ca-certs\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.952805 kubelet[2033]: I1213 15:11:24.952775 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.952980 
kubelet[2033]: I1213 15:11:24.952948 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0288134d9499acf4813b9083829a4e97-kubeconfig\") pod \"kube-scheduler-srv-iw0hd.gb1.brightbox.com\" (UID: \"0288134d9499acf4813b9083829a4e97\") " pod="kube-system/kube-scheduler-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.953162 kubelet[2033]: I1213 15:11:24.953133 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed4e02d90cc7cb2a2af075bc9a58762f-ca-certs\") pod \"kube-apiserver-srv-iw0hd.gb1.brightbox.com\" (UID: \"ed4e02d90cc7cb2a2af075bc9a58762f\") " pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.953357 kubelet[2033]: I1213 15:11:24.953327 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-flexvolume-dir\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.953549 kubelet[2033]: I1213 15:11:24.953513 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-k8s-certs\") pod \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:24.953720 kubelet[2033]: I1213 15:11:24.953699 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53c60e6868ba5a756e6cfe8802f1a7ee-kubeconfig\") pod 
\"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" (UID: \"53c60e6868ba5a756e6cfe8802f1a7ee\") " pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:25.423047 sudo[2044]: pam_unix(sudo:session): session closed for user root Dec 13 15:11:25.512084 kubelet[2033]: I1213 15:11:25.512033 2033 apiserver.go:52] "Watching apiserver" Dec 13 15:11:25.546109 kubelet[2033]: I1213 15:11:25.546031 2033 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 15:11:25.674209 kubelet[2033]: W1213 15:11:25.674023 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:11:25.674209 kubelet[2033]: E1213 15:11:25.674137 2033 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-iw0hd.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:25.674869 kubelet[2033]: W1213 15:11:25.674843 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:11:25.675225 kubelet[2033]: E1213 15:11:25.675073 2033 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-iw0hd.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" Dec 13 15:11:25.713251 kubelet[2033]: I1213 15:11:25.713214 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-iw0hd.gb1.brightbox.com" podStartSLOduration=2.713158938 podStartE2EDuration="2.713158938s" podCreationTimestamp="2024-12-13 15:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:11:25.702611194 +0000 UTC m=+1.362875754" 
watchObservedRunningTime="2024-12-13 15:11:25.713158938 +0000 UTC m=+1.373423475" Dec 13 15:11:25.713721 kubelet[2033]: I1213 15:11:25.713671 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-iw0hd.gb1.brightbox.com" podStartSLOduration=1.713644302 podStartE2EDuration="1.713644302s" podCreationTimestamp="2024-12-13 15:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:11:25.713632263 +0000 UTC m=+1.373896818" watchObservedRunningTime="2024-12-13 15:11:25.713644302 +0000 UTC m=+1.373908851" Dec 13 15:11:25.724892 kubelet[2033]: I1213 15:11:25.724852 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-iw0hd.gb1.brightbox.com" podStartSLOduration=1.72481342 podStartE2EDuration="1.72481342s" podCreationTimestamp="2024-12-13 15:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:11:25.723334411 +0000 UTC m=+1.383598976" watchObservedRunningTime="2024-12-13 15:11:25.72481342 +0000 UTC m=+1.385077975" Dec 13 15:11:27.350210 sudo[1342]: pam_unix(sudo:session): session closed for user root Dec 13 15:11:27.494606 sshd[1329]: pam_unix(sshd:session): session closed for user core Dec 13 15:11:27.499625 systemd-logind[1182]: Session 7 logged out. Waiting for processes to exit. Dec 13 15:11:27.500734 systemd[1]: sshd@6-10.243.84.50:22-139.178.68.195:50062.service: Deactivated successfully. Dec 13 15:11:27.502044 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 15:11:27.502342 systemd[1]: session-7.scope: Consumed 5.958s CPU time. Dec 13 15:11:27.503277 systemd-logind[1182]: Removed session 7. 
Dec 13 15:11:34.890385 kubelet[2033]: I1213 15:11:34.890311 2033 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 15:11:34.891464 env[1196]: time="2024-12-13T15:11:34.891211528Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 15:11:34.891946 kubelet[2033]: I1213 15:11:34.891791 2033 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 15:11:35.677767 kubelet[2033]: I1213 15:11:35.677715 2033 topology_manager.go:215] "Topology Admit Handler" podUID="689cc6a6-c24a-464b-85f8-949093eb9c3a" podNamespace="kube-system" podName="kube-proxy-vvbfm" Dec 13 15:11:35.689060 systemd[1]: Created slice kubepods-besteffort-pod689cc6a6_c24a_464b_85f8_949093eb9c3a.slice. Dec 13 15:11:35.698372 kubelet[2033]: I1213 15:11:35.698310 2033 topology_manager.go:215] "Topology Admit Handler" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" podNamespace="kube-system" podName="cilium-4cqql" Dec 13 15:11:35.706558 systemd[1]: Created slice kubepods-burstable-podd64fc5fa_9b45_4651_8aae_160965a0cf6b.slice. 
Dec 13 15:11:35.716663 kubelet[2033]: I1213 15:11:35.716607 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hostproc\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.716859 kubelet[2033]: I1213 15:11:35.716688 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jg9\" (UniqueName: \"kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-kube-api-access-r9jg9\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.716859 kubelet[2033]: I1213 15:11:35.716723 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-xtables-lock\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.716859 kubelet[2033]: I1213 15:11:35.716759 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-net\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.716859 kubelet[2033]: I1213 15:11:35.716816 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvtg2\" (UniqueName: \"kubernetes.io/projected/689cc6a6-c24a-464b-85f8-949093eb9c3a-kube-api-access-zvtg2\") pod \"kube-proxy-vvbfm\" (UID: \"689cc6a6-c24a-464b-85f8-949093eb9c3a\") " pod="kube-system/kube-proxy-vvbfm" Dec 13 15:11:35.717162 kubelet[2033]: I1213 15:11:35.716867 2033 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-config-path\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717162 kubelet[2033]: I1213 15:11:35.716907 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/689cc6a6-c24a-464b-85f8-949093eb9c3a-xtables-lock\") pod \"kube-proxy-vvbfm\" (UID: \"689cc6a6-c24a-464b-85f8-949093eb9c3a\") " pod="kube-system/kube-proxy-vvbfm" Dec 13 15:11:35.717162 kubelet[2033]: I1213 15:11:35.716936 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hubble-tls\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717162 kubelet[2033]: I1213 15:11:35.717007 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/689cc6a6-c24a-464b-85f8-949093eb9c3a-kube-proxy\") pod \"kube-proxy-vvbfm\" (UID: \"689cc6a6-c24a-464b-85f8-949093eb9c3a\") " pod="kube-system/kube-proxy-vvbfm" Dec 13 15:11:35.717162 kubelet[2033]: I1213 15:11:35.717064 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-lib-modules\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717162 kubelet[2033]: I1213 15:11:35.717094 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-run\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717506 kubelet[2033]: I1213 15:11:35.717125 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-bpf-maps\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717506 kubelet[2033]: I1213 15:11:35.717181 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cni-path\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717506 kubelet[2033]: I1213 15:11:35.717216 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d64fc5fa-9b45-4651-8aae-160965a0cf6b-clustermesh-secrets\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717506 kubelet[2033]: I1213 15:11:35.717246 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/689cc6a6-c24a-464b-85f8-949093eb9c3a-lib-modules\") pod \"kube-proxy-vvbfm\" (UID: \"689cc6a6-c24a-464b-85f8-949093eb9c3a\") " pod="kube-system/kube-proxy-vvbfm" Dec 13 15:11:35.717506 kubelet[2033]: I1213 15:11:35.717276 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-cgroup\") pod \"cilium-4cqql\" (UID: 
\"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717506 kubelet[2033]: I1213 15:11:35.717303 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-etc-cni-netd\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.717804 kubelet[2033]: I1213 15:11:35.717343 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-kernel\") pod \"cilium-4cqql\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") " pod="kube-system/cilium-4cqql" Dec 13 15:11:35.999946 env[1196]: time="2024-12-13T15:11:35.999755988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvbfm,Uid:689cc6a6-c24a-464b-85f8-949093eb9c3a,Namespace:kube-system,Attempt:0,}" Dec 13 15:11:36.010619 env[1196]: time="2024-12-13T15:11:36.010536931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4cqql,Uid:d64fc5fa-9b45-4651-8aae-160965a0cf6b,Namespace:kube-system,Attempt:0,}" Dec 13 15:11:36.018245 kubelet[2033]: I1213 15:11:36.018195 2033 topology_manager.go:215] "Topology Admit Handler" podUID="b782c812-a1ed-493d-9345-e18658c8b5eb" podNamespace="kube-system" podName="cilium-operator-5cc964979-pnqsv" Dec 13 15:11:36.036394 systemd[1]: Created slice kubepods-besteffort-podb782c812_a1ed_493d_9345_e18658c8b5eb.slice. Dec 13 15:11:36.074129 env[1196]: time="2024-12-13T15:11:36.068555152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:11:36.074129 env[1196]: time="2024-12-13T15:11:36.068616897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:11:36.074129 env[1196]: time="2024-12-13T15:11:36.068633336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:11:36.074129 env[1196]: time="2024-12-13T15:11:36.068790209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e86e73bc089ffee5e8fc026b0c1d49b5aef64eaf5994c239d8aed2013f52b071 pid=2132 runtime=io.containerd.runc.v2 Dec 13 15:11:36.074555 env[1196]: time="2024-12-13T15:11:36.053845151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:11:36.074555 env[1196]: time="2024-12-13T15:11:36.053946246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:11:36.074555 env[1196]: time="2024-12-13T15:11:36.053962967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:11:36.074555 env[1196]: time="2024-12-13T15:11:36.054354477Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f pid=2114 runtime=io.containerd.runc.v2 Dec 13 15:11:36.121444 kubelet[2033]: I1213 15:11:36.121386 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b782c812-a1ed-493d-9345-e18658c8b5eb-cilium-config-path\") pod \"cilium-operator-5cc964979-pnqsv\" (UID: \"b782c812-a1ed-493d-9345-e18658c8b5eb\") " pod="kube-system/cilium-operator-5cc964979-pnqsv" Dec 13 15:11:36.121444 kubelet[2033]: I1213 15:11:36.121456 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgb9p\" (UniqueName: \"kubernetes.io/projected/b782c812-a1ed-493d-9345-e18658c8b5eb-kube-api-access-kgb9p\") pod \"cilium-operator-5cc964979-pnqsv\" (UID: \"b782c812-a1ed-493d-9345-e18658c8b5eb\") " pod="kube-system/cilium-operator-5cc964979-pnqsv" Dec 13 15:11:36.130155 systemd[1]: Started cri-containerd-e86e73bc089ffee5e8fc026b0c1d49b5aef64eaf5994c239d8aed2013f52b071.scope. Dec 13 15:11:36.156056 systemd[1]: Started cri-containerd-b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f.scope. 
Dec 13 15:11:36.213271 env[1196]: time="2024-12-13T15:11:36.213206857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvbfm,Uid:689cc6a6-c24a-464b-85f8-949093eb9c3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e86e73bc089ffee5e8fc026b0c1d49b5aef64eaf5994c239d8aed2013f52b071\"" Dec 13 15:11:36.219010 env[1196]: time="2024-12-13T15:11:36.217980431Z" level=info msg="CreateContainer within sandbox \"e86e73bc089ffee5e8fc026b0c1d49b5aef64eaf5994c239d8aed2013f52b071\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 15:11:36.242857 env[1196]: time="2024-12-13T15:11:36.242793754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4cqql,Uid:d64fc5fa-9b45-4651-8aae-160965a0cf6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\"" Dec 13 15:11:36.247262 env[1196]: time="2024-12-13T15:11:36.247207318Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 15:11:36.263648 env[1196]: time="2024-12-13T15:11:36.262759344Z" level=info msg="CreateContainer within sandbox \"e86e73bc089ffee5e8fc026b0c1d49b5aef64eaf5994c239d8aed2013f52b071\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8e661296f79b683d25efdd72a6c0fc70f342c1c82ae2070399ab6e6d84f74186\"" Dec 13 15:11:36.264694 env[1196]: time="2024-12-13T15:11:36.264661686Z" level=info msg="StartContainer for \"8e661296f79b683d25efdd72a6c0fc70f342c1c82ae2070399ab6e6d84f74186\"" Dec 13 15:11:36.296459 systemd[1]: Started cri-containerd-8e661296f79b683d25efdd72a6c0fc70f342c1c82ae2070399ab6e6d84f74186.scope. 
Dec 13 15:11:36.342498 env[1196]: time="2024-12-13T15:11:36.342405347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pnqsv,Uid:b782c812-a1ed-493d-9345-e18658c8b5eb,Namespace:kube-system,Attempt:0,}" Dec 13 15:11:36.368356 env[1196]: time="2024-12-13T15:11:36.368309817Z" level=info msg="StartContainer for \"8e661296f79b683d25efdd72a6c0fc70f342c1c82ae2070399ab6e6d84f74186\" returns successfully" Dec 13 15:11:36.386953 env[1196]: time="2024-12-13T15:11:36.386838470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:11:36.387341 env[1196]: time="2024-12-13T15:11:36.386913849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:11:36.387341 env[1196]: time="2024-12-13T15:11:36.386936064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:11:36.387590 env[1196]: time="2024-12-13T15:11:36.387262069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c pid=2229 runtime=io.containerd.runc.v2 Dec 13 15:11:36.409781 systemd[1]: Started cri-containerd-e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c.scope. 
Dec 13 15:11:36.498101 env[1196]: time="2024-12-13T15:11:36.498048942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pnqsv,Uid:b782c812-a1ed-493d-9345-e18658c8b5eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c\"" Dec 13 15:11:36.703723 kubelet[2033]: I1213 15:11:36.703626 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vvbfm" podStartSLOduration=1.703461754 podStartE2EDuration="1.703461754s" podCreationTimestamp="2024-12-13 15:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:11:36.699174432 +0000 UTC m=+12.359438976" watchObservedRunningTime="2024-12-13 15:11:36.703461754 +0000 UTC m=+12.363726318" Dec 13 15:11:45.400084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916415046.mount: Deactivated successfully. 
Dec 13 15:11:50.071843 env[1196]: time="2024-12-13T15:11:50.071682432Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:50.076044 env[1196]: time="2024-12-13T15:11:50.075938401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:50.078928 env[1196]: time="2024-12-13T15:11:50.078865819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:11:50.079339 env[1196]: time="2024-12-13T15:11:50.079271601Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 15:11:50.082533 env[1196]: time="2024-12-13T15:11:50.082486024Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 15:11:50.084474 env[1196]: time="2024-12-13T15:11:50.083997762Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 15:11:50.106618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824054897.mount: Deactivated successfully. Dec 13 15:11:50.116736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789697752.mount: Deactivated successfully. 
Dec 13 15:11:50.119547 env[1196]: time="2024-12-13T15:11:50.119487598Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d\"" Dec 13 15:11:50.121686 env[1196]: time="2024-12-13T15:11:50.121643031Z" level=info msg="StartContainer for \"e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d\"" Dec 13 15:11:50.162431 systemd[1]: Started cri-containerd-e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d.scope. Dec 13 15:11:50.243333 env[1196]: time="2024-12-13T15:11:50.243259620Z" level=info msg="StartContainer for \"e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d\" returns successfully" Dec 13 15:11:50.253207 systemd[1]: cri-containerd-e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d.scope: Deactivated successfully. Dec 13 15:11:50.480667 env[1196]: time="2024-12-13T15:11:50.480583388Z" level=info msg="shim disconnected" id=e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d Dec 13 15:11:50.481076 env[1196]: time="2024-12-13T15:11:50.481044767Z" level=warning msg="cleaning up after shim disconnected" id=e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d namespace=k8s.io Dec 13 15:11:50.481234 env[1196]: time="2024-12-13T15:11:50.481206729Z" level=info msg="cleaning up dead shim" Dec 13 15:11:50.495270 env[1196]: time="2024-12-13T15:11:50.495209278Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:11:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2438 runtime=io.containerd.runc.v2\n" Dec 13 15:11:50.715814 env[1196]: time="2024-12-13T15:11:50.715511995Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 
15:11:50.736891 env[1196]: time="2024-12-13T15:11:50.736762307Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439\"" Dec 13 15:11:50.738203 env[1196]: time="2024-12-13T15:11:50.738161737Z" level=info msg="StartContainer for \"adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439\"" Dec 13 15:11:50.766507 systemd[1]: Started cri-containerd-adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439.scope. Dec 13 15:11:50.811527 env[1196]: time="2024-12-13T15:11:50.811460181Z" level=info msg="StartContainer for \"adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439\" returns successfully" Dec 13 15:11:50.831193 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 15:11:50.832205 systemd[1]: Stopped systemd-sysctl.service. Dec 13 15:11:50.833360 systemd[1]: Stopping systemd-sysctl.service... Dec 13 15:11:50.838745 systemd[1]: Starting systemd-sysctl.service... Dec 13 15:11:50.841005 systemd[1]: cri-containerd-adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439.scope: Deactivated successfully. Dec 13 15:11:50.857078 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 15:11:50.891416 env[1196]: time="2024-12-13T15:11:50.891351669Z" level=info msg="shim disconnected" id=adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439 Dec 13 15:11:50.891839 env[1196]: time="2024-12-13T15:11:50.891808782Z" level=warning msg="cleaning up after shim disconnected" id=adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439 namespace=k8s.io Dec 13 15:11:50.892010 env[1196]: time="2024-12-13T15:11:50.891963757Z" level=info msg="cleaning up dead shim" Dec 13 15:11:50.902459 env[1196]: time="2024-12-13T15:11:50.902400148Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:11:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2502 runtime=io.containerd.runc.v2\n" Dec 13 15:11:51.100872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d-rootfs.mount: Deactivated successfully. Dec 13 15:11:51.717397 env[1196]: time="2024-12-13T15:11:51.715765132Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 15:11:51.744515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224952219.mount: Deactivated successfully. Dec 13 15:11:51.760290 env[1196]: time="2024-12-13T15:11:51.760215179Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8\"" Dec 13 15:11:51.762579 env[1196]: time="2024-12-13T15:11:51.761337934Z" level=info msg="StartContainer for \"f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8\"" Dec 13 15:11:51.789289 systemd[1]: Started cri-containerd-f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8.scope. 
Dec 13 15:11:51.842212 env[1196]: time="2024-12-13T15:11:51.842157582Z" level=info msg="StartContainer for \"f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8\" returns successfully" Dec 13 15:11:51.848900 systemd[1]: cri-containerd-f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8.scope: Deactivated successfully. Dec 13 15:11:51.876947 env[1196]: time="2024-12-13T15:11:51.876872111Z" level=info msg="shim disconnected" id=f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8 Dec 13 15:11:51.876947 env[1196]: time="2024-12-13T15:11:51.876948084Z" level=warning msg="cleaning up after shim disconnected" id=f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8 namespace=k8s.io Dec 13 15:11:51.877302 env[1196]: time="2024-12-13T15:11:51.876963425Z" level=info msg="cleaning up dead shim" Dec 13 15:11:51.887526 env[1196]: time="2024-12-13T15:11:51.887457258Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:11:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2560 runtime=io.containerd.runc.v2\n" Dec 13 15:11:52.721651 env[1196]: time="2024-12-13T15:11:52.721333037Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 15:11:52.739621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434933007.mount: Deactivated successfully. Dec 13 15:11:52.752256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2001291567.mount: Deactivated successfully. 
Dec 13 15:11:52.753426 env[1196]: time="2024-12-13T15:11:52.753375039Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e\""
Dec 13 15:11:52.754693 env[1196]: time="2024-12-13T15:11:52.754646865Z" level=info msg="StartContainer for \"963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e\""
Dec 13 15:11:52.787110 systemd[1]: Started cri-containerd-963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e.scope.
Dec 13 15:11:52.837166 systemd[1]: cri-containerd-963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e.scope: Deactivated successfully.
Dec 13 15:11:52.840105 env[1196]: time="2024-12-13T15:11:52.840042036Z" level=info msg="StartContainer for \"963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e\" returns successfully"
Dec 13 15:11:52.875814 env[1196]: time="2024-12-13T15:11:52.875754141Z" level=info msg="shim disconnected" id=963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e
Dec 13 15:11:52.875814 env[1196]: time="2024-12-13T15:11:52.875815749Z" level=warning msg="cleaning up after shim disconnected" id=963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e namespace=k8s.io
Dec 13 15:11:52.876173 env[1196]: time="2024-12-13T15:11:52.875843549Z" level=info msg="cleaning up dead shim"
Dec 13 15:11:52.888851 env[1196]: time="2024-12-13T15:11:52.888791917Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:11:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2612 runtime=io.containerd.runc.v2\n"
Dec 13 15:11:53.726666 env[1196]: time="2024-12-13T15:11:53.726254277Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 15:11:53.748607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039508535.mount: Deactivated successfully.
Dec 13 15:11:53.755760 env[1196]: time="2024-12-13T15:11:53.755684702Z" level=info msg="CreateContainer within sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\""
Dec 13 15:11:53.760030 env[1196]: time="2024-12-13T15:11:53.759988974Z" level=info msg="StartContainer for \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\""
Dec 13 15:11:53.795104 systemd[1]: Started cri-containerd-19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680.scope.
Dec 13 15:11:53.857448 env[1196]: time="2024-12-13T15:11:53.857359770Z" level=info msg="StartContainer for \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\" returns successfully"
Dec 13 15:11:54.117136 kubelet[2033]: I1213 15:11:54.117082 2033 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 15:11:54.160223 kubelet[2033]: I1213 15:11:54.160156 2033 topology_manager.go:215] "Topology Admit Handler" podUID="1d03b7d9-110f-4161-a1fa-11aac10ee57a" podNamespace="kube-system" podName="coredns-76f75df574-b5pwt"
Dec 13 15:11:54.167940 systemd[1]: Created slice kubepods-burstable-pod1d03b7d9_110f_4161_a1fa_11aac10ee57a.slice.
Dec 13 15:11:54.174990 kubelet[2033]: I1213 15:11:54.174943 2033 topology_manager.go:215] "Topology Admit Handler" podUID="d61eaac1-74df-4055-bc27-5229eb7f96cf" podNamespace="kube-system" podName="coredns-76f75df574-h8xdx"
Dec 13 15:11:54.181933 systemd[1]: Created slice kubepods-burstable-podd61eaac1_74df_4055_bc27_5229eb7f96cf.slice.
Dec 13 15:11:54.271053 kubelet[2033]: I1213 15:11:54.271012 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d03b7d9-110f-4161-a1fa-11aac10ee57a-config-volume\") pod \"coredns-76f75df574-b5pwt\" (UID: \"1d03b7d9-110f-4161-a1fa-11aac10ee57a\") " pod="kube-system/coredns-76f75df574-b5pwt"
Dec 13 15:11:54.271053 kubelet[2033]: I1213 15:11:54.271070 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d61eaac1-74df-4055-bc27-5229eb7f96cf-config-volume\") pod \"coredns-76f75df574-h8xdx\" (UID: \"d61eaac1-74df-4055-bc27-5229eb7f96cf\") " pod="kube-system/coredns-76f75df574-h8xdx"
Dec 13 15:11:54.271352 kubelet[2033]: I1213 15:11:54.271101 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sb28\" (UniqueName: \"kubernetes.io/projected/d61eaac1-74df-4055-bc27-5229eb7f96cf-kube-api-access-7sb28\") pod \"coredns-76f75df574-h8xdx\" (UID: \"d61eaac1-74df-4055-bc27-5229eb7f96cf\") " pod="kube-system/coredns-76f75df574-h8xdx"
Dec 13 15:11:54.271352 kubelet[2033]: I1213 15:11:54.271130 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxqkb\" (UniqueName: \"kubernetes.io/projected/1d03b7d9-110f-4161-a1fa-11aac10ee57a-kube-api-access-vxqkb\") pod \"coredns-76f75df574-b5pwt\" (UID: \"1d03b7d9-110f-4161-a1fa-11aac10ee57a\") " pod="kube-system/coredns-76f75df574-b5pwt"
Dec 13 15:11:54.473442 env[1196]: time="2024-12-13T15:11:54.473326279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b5pwt,Uid:1d03b7d9-110f-4161-a1fa-11aac10ee57a,Namespace:kube-system,Attempt:0,}"
Dec 13 15:11:54.491013 env[1196]: time="2024-12-13T15:11:54.487334690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h8xdx,Uid:d61eaac1-74df-4055-bc27-5229eb7f96cf,Namespace:kube-system,Attempt:0,}"
Dec 13 15:11:54.758407 kubelet[2033]: I1213 15:11:54.758177 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4cqql" podStartSLOduration=5.921861556 podStartE2EDuration="19.757996029s" podCreationTimestamp="2024-12-13 15:11:35 +0000 UTC" firstStartedPulling="2024-12-13 15:11:36.244997054 +0000 UTC m=+11.905261591" lastFinishedPulling="2024-12-13 15:11:50.081131513 +0000 UTC m=+25.741396064" observedRunningTime="2024-12-13 15:11:54.754997122 +0000 UTC m=+30.415261670" watchObservedRunningTime="2024-12-13 15:11:54.757996029 +0000 UTC m=+30.418260583"
Dec 13 15:11:56.703165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393137114.mount: Deactivated successfully.
Dec 13 15:11:57.856198 env[1196]: time="2024-12-13T15:11:57.856033146Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:57.859921 env[1196]: time="2024-12-13T15:11:57.859850511Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:57.863158 env[1196]: time="2024-12-13T15:11:57.863099795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:11:57.865105 env[1196]: time="2024-12-13T15:11:57.864125866Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 15:11:57.872864 env[1196]: time="2024-12-13T15:11:57.872821736Z" level=info msg="CreateContainer within sandbox \"e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 15:11:57.893167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246999925.mount: Deactivated successfully.
Dec 13 15:11:57.903879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177504448.mount: Deactivated successfully.
Dec 13 15:11:57.907322 env[1196]: time="2024-12-13T15:11:57.907258615Z" level=info msg="CreateContainer within sandbox \"e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\""
Dec 13 15:11:57.908544 env[1196]: time="2024-12-13T15:11:57.908433406Z" level=info msg="StartContainer for \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\""
Dec 13 15:11:57.953763 systemd[1]: Started cri-containerd-f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b.scope.
Dec 13 15:11:58.072053 env[1196]: time="2024-12-13T15:11:58.071942233Z" level=info msg="StartContainer for \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\" returns successfully"
Dec 13 15:11:58.846612 kubelet[2033]: I1213 15:11:58.846521 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pnqsv" podStartSLOduration=2.481546524 podStartE2EDuration="23.845259986s" podCreationTimestamp="2024-12-13 15:11:35 +0000 UTC" firstStartedPulling="2024-12-13 15:11:36.501727295 +0000 UTC m=+12.161991835" lastFinishedPulling="2024-12-13 15:11:57.865440754 +0000 UTC m=+33.525705297" observedRunningTime="2024-12-13 15:11:58.84507552 +0000 UTC m=+34.505340063" watchObservedRunningTime="2024-12-13 15:11:58.845259986 +0000 UTC m=+34.505524540"
Dec 13 15:12:02.377685 systemd-networkd[1030]: cilium_host: Link UP
Dec 13 15:12:02.387666 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 15:12:02.387878 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 15:12:02.383091 systemd-networkd[1030]: cilium_net: Link UP
Dec 13 15:12:02.384875 systemd-networkd[1030]: cilium_net: Gained carrier
Dec 13 15:12:02.387513 systemd-networkd[1030]: cilium_host: Gained carrier
Dec 13 15:12:02.446522 systemd-networkd[1030]: cilium_host: Gained IPv6LL
Dec 13 15:12:02.578243 systemd-networkd[1030]: cilium_vxlan: Link UP
Dec 13 15:12:02.578255 systemd-networkd[1030]: cilium_vxlan: Gained carrier
Dec 13 15:12:03.044214 systemd-networkd[1030]: cilium_net: Gained IPv6LL
Dec 13 15:12:03.197094 kernel: NET: Registered PF_ALG protocol family
Dec 13 15:12:04.274393 systemd-networkd[1030]: lxc_health: Link UP
Dec 13 15:12:04.311076 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 15:12:04.308314 systemd-networkd[1030]: lxc_health: Gained carrier
Dec 13 15:12:04.323370 systemd-networkd[1030]: cilium_vxlan: Gained IPv6LL
Dec 13 15:12:04.548579 systemd-networkd[1030]: lxcf2aa982f4d09: Link UP
Dec 13 15:12:04.560028 kernel: eth0: renamed from tmpb60a9
Dec 13 15:12:04.575184 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf2aa982f4d09: link becomes ready
Dec 13 15:12:04.574623 systemd-networkd[1030]: lxcf2aa982f4d09: Gained carrier
Dec 13 15:12:04.606349 systemd-networkd[1030]: lxce935bee76d9a: Link UP
Dec 13 15:12:04.622995 kernel: eth0: renamed from tmpc64d6
Dec 13 15:12:04.632010 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce935bee76d9a: link becomes ready
Dec 13 15:12:04.629217 systemd-networkd[1030]: lxce935bee76d9a: Gained carrier
Dec 13 15:12:05.411258 systemd-networkd[1030]: lxc_health: Gained IPv6LL
Dec 13 15:12:06.638310 systemd-networkd[1030]: lxcf2aa982f4d09: Gained IPv6LL
Dec 13 15:12:06.639137 systemd-networkd[1030]: lxce935bee76d9a: Gained IPv6LL
Dec 13 15:12:10.379546 env[1196]: time="2024-12-13T15:12:10.379343768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:12:10.380907 env[1196]: time="2024-12-13T15:12:10.379480523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:12:10.380907 env[1196]: time="2024-12-13T15:12:10.379499566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:12:10.380907 env[1196]: time="2024-12-13T15:12:10.379871630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b60a954da2a38d305ce4da0d5696b7da06ce94fa462d257d925f0e4cc27b0b5d pid=3207 runtime=io.containerd.runc.v2
Dec 13 15:12:10.397554 env[1196]: time="2024-12-13T15:12:10.397424375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:12:10.397874 env[1196]: time="2024-12-13T15:12:10.397831240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:12:10.398217 env[1196]: time="2024-12-13T15:12:10.398145404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:12:10.404224 env[1196]: time="2024-12-13T15:12:10.399139774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c64d6ebd1ce4c2f970a99ea9496c4bb9379eea6b1085f4ec2633383c443f9055 pid=3206 runtime=io.containerd.runc.v2
Dec 13 15:12:10.441046 systemd[1]: run-containerd-runc-k8s.io-b60a954da2a38d305ce4da0d5696b7da06ce94fa462d257d925f0e4cc27b0b5d-runc.RXJQQ3.mount: Deactivated successfully.
Dec 13 15:12:10.455783 systemd[1]: Started cri-containerd-b60a954da2a38d305ce4da0d5696b7da06ce94fa462d257d925f0e4cc27b0b5d.scope.
Dec 13 15:12:10.485255 systemd[1]: Started cri-containerd-c64d6ebd1ce4c2f970a99ea9496c4bb9379eea6b1085f4ec2633383c443f9055.scope.
Dec 13 15:12:10.606746 env[1196]: time="2024-12-13T15:12:10.606655821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h8xdx,Uid:d61eaac1-74df-4055-bc27-5229eb7f96cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64d6ebd1ce4c2f970a99ea9496c4bb9379eea6b1085f4ec2633383c443f9055\""
Dec 13 15:12:10.623392 env[1196]: time="2024-12-13T15:12:10.623333062Z" level=info msg="CreateContainer within sandbox \"c64d6ebd1ce4c2f970a99ea9496c4bb9379eea6b1085f4ec2633383c443f9055\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 15:12:10.635698 env[1196]: time="2024-12-13T15:12:10.635515283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b5pwt,Uid:1d03b7d9-110f-4161-a1fa-11aac10ee57a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b60a954da2a38d305ce4da0d5696b7da06ce94fa462d257d925f0e4cc27b0b5d\""
Dec 13 15:12:10.640365 env[1196]: time="2024-12-13T15:12:10.640323032Z" level=info msg="CreateContainer within sandbox \"b60a954da2a38d305ce4da0d5696b7da06ce94fa462d257d925f0e4cc27b0b5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 15:12:10.658880 env[1196]: time="2024-12-13T15:12:10.658822998Z" level=info msg="CreateContainer within sandbox \"c64d6ebd1ce4c2f970a99ea9496c4bb9379eea6b1085f4ec2633383c443f9055\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79944eed850d55e38ad4ea7524dacc1f3cc88dbb96e4a64f5e2095b57d357a6e\""
Dec 13 15:12:10.660194 env[1196]: time="2024-12-13T15:12:10.660153415Z" level=info msg="StartContainer for \"79944eed850d55e38ad4ea7524dacc1f3cc88dbb96e4a64f5e2095b57d357a6e\""
Dec 13 15:12:10.661392 env[1196]: time="2024-12-13T15:12:10.661344880Z" level=info msg="CreateContainer within sandbox \"b60a954da2a38d305ce4da0d5696b7da06ce94fa462d257d925f0e4cc27b0b5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61a658b0db6241912c86eae914c6bab574155f33e1df285fe0ea1b09d5870751\""
Dec 13 15:12:10.661874 env[1196]: time="2024-12-13T15:12:10.661837229Z" level=info msg="StartContainer for \"61a658b0db6241912c86eae914c6bab574155f33e1df285fe0ea1b09d5870751\""
Dec 13 15:12:10.697950 systemd[1]: Started cri-containerd-79944eed850d55e38ad4ea7524dacc1f3cc88dbb96e4a64f5e2095b57d357a6e.scope.
Dec 13 15:12:10.717517 systemd[1]: Started cri-containerd-61a658b0db6241912c86eae914c6bab574155f33e1df285fe0ea1b09d5870751.scope.
Dec 13 15:12:10.776549 env[1196]: time="2024-12-13T15:12:10.776484726Z" level=info msg="StartContainer for \"79944eed850d55e38ad4ea7524dacc1f3cc88dbb96e4a64f5e2095b57d357a6e\" returns successfully"
Dec 13 15:12:10.786061 env[1196]: time="2024-12-13T15:12:10.786008610Z" level=info msg="StartContainer for \"61a658b0db6241912c86eae914c6bab574155f33e1df285fe0ea1b09d5870751\" returns successfully"
Dec 13 15:12:10.844107 kubelet[2033]: I1213 15:12:10.844047 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-b5pwt" podStartSLOduration=34.843811192 podStartE2EDuration="34.843811192s" podCreationTimestamp="2024-12-13 15:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:12:10.842841303 +0000 UTC m=+46.503105858" watchObservedRunningTime="2024-12-13 15:12:10.843811192 +0000 UTC m=+46.504075746"
Dec 13 15:12:10.868828 kubelet[2033]: I1213 15:12:10.868782 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-h8xdx" podStartSLOduration=35.868713268 podStartE2EDuration="35.868713268s" podCreationTimestamp="2024-12-13 15:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:12:10.865164703 +0000 UTC m=+46.525429257" watchObservedRunningTime="2024-12-13 15:12:10.868713268 +0000 UTC m=+46.528977821"
Dec 13 15:12:31.271870 systemd[1]: Started sshd@7-10.243.84.50:22-139.178.68.195:54362.service.
Dec 13 15:12:32.181760 sshd[3361]: Accepted publickey for core from 139.178.68.195 port 54362 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:32.185299 sshd[3361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:32.194076 systemd-logind[1182]: New session 8 of user core.
Dec 13 15:12:32.196144 systemd[1]: Started session-8.scope.
Dec 13 15:12:33.002676 sshd[3361]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:33.007141 systemd[1]: sshd@7-10.243.84.50:22-139.178.68.195:54362.service: Deactivated successfully.
Dec 13 15:12:33.008565 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 15:12:33.009462 systemd-logind[1182]: Session 8 logged out. Waiting for processes to exit.
Dec 13 15:12:33.010722 systemd-logind[1182]: Removed session 8.
Dec 13 15:12:38.152669 systemd[1]: Started sshd@8-10.243.84.50:22-139.178.68.195:57522.service.
Dec 13 15:12:39.042712 sshd[3376]: Accepted publickey for core from 139.178.68.195 port 57522 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:39.044871 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:39.053232 systemd[1]: Started session-9.scope.
Dec 13 15:12:39.055111 systemd-logind[1182]: New session 9 of user core.
Dec 13 15:12:39.785530 sshd[3376]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:39.789329 systemd-logind[1182]: Session 9 logged out. Waiting for processes to exit.
Dec 13 15:12:39.790363 systemd[1]: sshd@8-10.243.84.50:22-139.178.68.195:57522.service: Deactivated successfully.
Dec 13 15:12:39.791404 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 15:12:39.792543 systemd-logind[1182]: Removed session 9.
Dec 13 15:12:44.936788 systemd[1]: Started sshd@9-10.243.84.50:22-139.178.68.195:57538.service.
Dec 13 15:12:45.832907 sshd[3390]: Accepted publickey for core from 139.178.68.195 port 57538 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:45.834818 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:45.842900 systemd-logind[1182]: New session 10 of user core.
Dec 13 15:12:45.843246 systemd[1]: Started session-10.scope.
Dec 13 15:12:46.559854 sshd[3390]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:46.563823 systemd[1]: sshd@9-10.243.84.50:22-139.178.68.195:57538.service: Deactivated successfully.
Dec 13 15:12:46.564902 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 15:12:46.565831 systemd-logind[1182]: Session 10 logged out. Waiting for processes to exit.
Dec 13 15:12:46.567401 systemd-logind[1182]: Removed session 10.
Dec 13 15:12:51.707570 systemd[1]: Started sshd@10-10.243.84.50:22-139.178.68.195:58190.service.
Dec 13 15:12:52.594388 sshd[3403]: Accepted publickey for core from 139.178.68.195 port 58190 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:52.596986 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:52.605040 systemd-logind[1182]: New session 11 of user core.
Dec 13 15:12:52.606231 systemd[1]: Started session-11.scope.
Dec 13 15:12:53.319045 sshd[3403]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:53.323200 systemd[1]: sshd@10-10.243.84.50:22-139.178.68.195:58190.service: Deactivated successfully.
Dec 13 15:12:53.324382 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 15:12:53.325206 systemd-logind[1182]: Session 11 logged out. Waiting for processes to exit.
Dec 13 15:12:53.326586 systemd-logind[1182]: Removed session 11.
Dec 13 15:12:53.467403 systemd[1]: Started sshd@11-10.243.84.50:22-139.178.68.195:58194.service.
Dec 13 15:12:54.350333 sshd[3416]: Accepted publickey for core from 139.178.68.195 port 58194 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:54.353390 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:54.363202 systemd[1]: Started session-12.scope.
Dec 13 15:12:54.363834 systemd-logind[1182]: New session 12 of user core.
Dec 13 15:12:55.130286 sshd[3416]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:55.133837 systemd[1]: sshd@11-10.243.84.50:22-139.178.68.195:58194.service: Deactivated successfully.
Dec 13 15:12:55.135177 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 15:12:55.136307 systemd-logind[1182]: Session 12 logged out. Waiting for processes to exit.
Dec 13 15:12:55.137785 systemd-logind[1182]: Removed session 12.
Dec 13 15:12:55.276531 systemd[1]: Started sshd@12-10.243.84.50:22-139.178.68.195:58208.service.
Dec 13 15:12:56.159620 sshd[3426]: Accepted publickey for core from 139.178.68.195 port 58208 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:56.162257 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:56.169499 systemd[1]: Started session-13.scope.
Dec 13 15:12:56.170280 systemd-logind[1182]: New session 13 of user core.
Dec 13 15:12:56.901215 sshd[3426]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:56.905178 systemd[1]: sshd@12-10.243.84.50:22-139.178.68.195:58208.service: Deactivated successfully.
Dec 13 15:12:56.906344 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 15:12:56.907151 systemd-logind[1182]: Session 13 logged out. Waiting for processes to exit.
Dec 13 15:12:56.908292 systemd-logind[1182]: Removed session 13.
Dec 13 15:13:02.050923 systemd[1]: Started sshd@13-10.243.84.50:22-139.178.68.195:54200.service.
Dec 13 15:13:02.940264 sshd[3439]: Accepted publickey for core from 139.178.68.195 port 54200 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:02.942709 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:02.951104 systemd-logind[1182]: New session 14 of user core.
Dec 13 15:13:02.952268 systemd[1]: Started session-14.scope.
Dec 13 15:13:03.655822 sshd[3439]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:03.660332 systemd-logind[1182]: Session 14 logged out. Waiting for processes to exit.
Dec 13 15:13:03.660773 systemd[1]: sshd@13-10.243.84.50:22-139.178.68.195:54200.service: Deactivated successfully.
Dec 13 15:13:03.661850 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 15:13:03.663170 systemd-logind[1182]: Removed session 14.
Dec 13 15:13:08.807397 systemd[1]: Started sshd@14-10.243.84.50:22-139.178.68.195:35296.service.
Dec 13 15:13:09.694017 sshd[3452]: Accepted publickey for core from 139.178.68.195 port 35296 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:09.696861 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:09.705839 systemd-logind[1182]: New session 15 of user core.
Dec 13 15:13:09.706014 systemd[1]: Started session-15.scope.
Dec 13 15:13:10.408953 sshd[3452]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:10.413600 systemd-logind[1182]: Session 15 logged out. Waiting for processes to exit.
Dec 13 15:13:10.413985 systemd[1]: sshd@14-10.243.84.50:22-139.178.68.195:35296.service: Deactivated successfully.
Dec 13 15:13:10.415241 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 15:13:10.416759 systemd-logind[1182]: Removed session 15.
Dec 13 15:13:10.555399 systemd[1]: Started sshd@15-10.243.84.50:22-139.178.68.195:35312.service.
Dec 13 15:13:11.438962 sshd[3464]: Accepted publickey for core from 139.178.68.195 port 35312 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:11.440965 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:11.448892 systemd[1]: Started session-16.scope.
Dec 13 15:13:11.451132 systemd-logind[1182]: New session 16 of user core.
Dec 13 15:13:12.542623 sshd[3464]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:12.547209 systemd-logind[1182]: Session 16 logged out. Waiting for processes to exit.
Dec 13 15:13:12.547794 systemd[1]: sshd@15-10.243.84.50:22-139.178.68.195:35312.service: Deactivated successfully.
Dec 13 15:13:12.548933 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 15:13:12.550182 systemd-logind[1182]: Removed session 16.
Dec 13 15:13:12.690501 systemd[1]: Started sshd@16-10.243.84.50:22-139.178.68.195:35324.service.
Dec 13 15:13:13.583046 sshd[3474]: Accepted publickey for core from 139.178.68.195 port 35324 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:13.584958 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:13.592719 systemd-logind[1182]: New session 17 of user core.
Dec 13 15:13:13.593031 systemd[1]: Started session-17.scope.
Dec 13 15:13:16.388559 sshd[3474]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:16.398155 systemd[1]: sshd@16-10.243.84.50:22-139.178.68.195:35324.service: Deactivated successfully.
Dec 13 15:13:16.399747 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 15:13:16.401186 systemd-logind[1182]: Session 17 logged out. Waiting for processes to exit.
Dec 13 15:13:16.403296 systemd-logind[1182]: Removed session 17.
Dec 13 15:13:16.537260 systemd[1]: Started sshd@17-10.243.84.50:22-139.178.68.195:59670.service.
Dec 13 15:13:17.430339 sshd[3491]: Accepted publickey for core from 139.178.68.195 port 59670 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:17.431362 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:17.440080 systemd[1]: Started session-18.scope.
Dec 13 15:13:17.441931 systemd-logind[1182]: New session 18 of user core.
Dec 13 15:13:18.375500 sshd[3491]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:18.379434 systemd[1]: sshd@17-10.243.84.50:22-139.178.68.195:59670.service: Deactivated successfully.
Dec 13 15:13:18.380513 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 15:13:18.381264 systemd-logind[1182]: Session 18 logged out. Waiting for processes to exit.
Dec 13 15:13:18.382314 systemd-logind[1182]: Removed session 18.
Dec 13 15:13:18.525040 systemd[1]: Started sshd@18-10.243.84.50:22-139.178.68.195:59682.service.
Dec 13 15:13:19.410819 sshd[3501]: Accepted publickey for core from 139.178.68.195 port 59682 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:19.413187 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:19.422426 systemd-logind[1182]: New session 19 of user core.
Dec 13 15:13:19.422769 systemd[1]: Started session-19.scope.
Dec 13 15:13:20.135947 sshd[3501]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:20.141657 systemd-logind[1182]: Session 19 logged out. Waiting for processes to exit.
Dec 13 15:13:20.142236 systemd[1]: sshd@18-10.243.84.50:22-139.178.68.195:59682.service: Deactivated successfully.
Dec 13 15:13:20.143601 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 15:13:20.145190 systemd-logind[1182]: Removed session 19.
Dec 13 15:13:25.282688 systemd[1]: Started sshd@19-10.243.84.50:22-139.178.68.195:59690.service.
Dec 13 15:13:26.170355 sshd[3518]: Accepted publickey for core from 139.178.68.195 port 59690 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:26.173576 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:26.182882 systemd[1]: Started session-20.scope.
Dec 13 15:13:26.183537 systemd-logind[1182]: New session 20 of user core.
Dec 13 15:13:26.885091 sshd[3518]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:26.889624 systemd[1]: sshd@19-10.243.84.50:22-139.178.68.195:59690.service: Deactivated successfully.
Dec 13 15:13:26.890891 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 15:13:26.891634 systemd-logind[1182]: Session 20 logged out. Waiting for processes to exit.
Dec 13 15:13:26.892725 systemd-logind[1182]: Removed session 20.
Dec 13 15:13:32.035437 systemd[1]: Started sshd@20-10.243.84.50:22-139.178.68.195:51250.service.
Dec 13 15:13:32.926658 sshd[3529]: Accepted publickey for core from 139.178.68.195 port 51250 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:32.928501 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:32.936083 systemd-logind[1182]: New session 21 of user core.
Dec 13 15:13:32.936174 systemd[1]: Started session-21.scope.
Dec 13 15:13:33.647856 sshd[3529]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:33.652312 systemd[1]: sshd@20-10.243.84.50:22-139.178.68.195:51250.service: Deactivated successfully.
Dec 13 15:13:33.653465 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 15:13:33.654931 systemd-logind[1182]: Session 21 logged out. Waiting for processes to exit.
Dec 13 15:13:33.656430 systemd-logind[1182]: Removed session 21.
Dec 13 15:13:38.799816 systemd[1]: Started sshd@21-10.243.84.50:22-139.178.68.195:59070.service.
Dec 13 15:13:39.688401 sshd[3543]: Accepted publickey for core from 139.178.68.195 port 59070 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:39.691398 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:39.698316 systemd-logind[1182]: New session 22 of user core.
Dec 13 15:13:39.699370 systemd[1]: Started session-22.scope.
Dec 13 15:13:40.397024 sshd[3543]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:40.401620 systemd[1]: sshd@21-10.243.84.50:22-139.178.68.195:59070.service: Deactivated successfully.
Dec 13 15:13:40.402709 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 15:13:40.403391 systemd-logind[1182]: Session 22 logged out. Waiting for processes to exit.
Dec 13 15:13:40.405177 systemd-logind[1182]: Removed session 22.
Dec 13 15:13:40.545159 systemd[1]: Started sshd@22-10.243.84.50:22-139.178.68.195:59084.service.
Dec 13 15:13:41.429851 sshd[3555]: Accepted publickey for core from 139.178.68.195 port 59084 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:41.432618 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:41.439049 systemd-logind[1182]: New session 23 of user core.
Dec 13 15:13:41.440396 systemd[1]: Started session-23.scope.
Dec 13 15:13:44.757489 systemd[1]: run-containerd-runc-k8s.io-19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680-runc.6HSp6L.mount: Deactivated successfully.
Dec 13 15:13:44.786989 env[1196]: time="2024-12-13T15:13:44.786873043Z" level=info msg="StopContainer for \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\" with timeout 30 (s)"
Dec 13 15:13:44.788528 env[1196]: time="2024-12-13T15:13:44.788483791Z" level=info msg="Stop container \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\" with signal terminated"
Dec 13 15:13:44.829076 env[1196]: time="2024-12-13T15:13:44.828942529Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 15:13:44.834939 env[1196]: time="2024-12-13T15:13:44.834891763Z" level=info msg="StopContainer for \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\" with timeout 2 (s)"
Dec 13 15:13:44.835864 env[1196]: time="2024-12-13T15:13:44.835824460Z" level=info msg="Stop container \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\" with signal terminated"
Dec 13 15:13:44.845109 systemd[1]: cri-containerd-f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b.scope: Deactivated successfully.
Dec 13 15:13:44.855126 systemd-networkd[1030]: lxc_health: Link DOWN
Dec 13 15:13:44.855138 systemd-networkd[1030]: lxc_health: Lost carrier
Dec 13 15:13:44.910525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b-rootfs.mount: Deactivated successfully.
Dec 13 15:13:44.918548 env[1196]: time="2024-12-13T15:13:44.918490645Z" level=info msg="shim disconnected" id=f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b
Dec 13 15:13:44.918785 env[1196]: time="2024-12-13T15:13:44.918551470Z" level=warning msg="cleaning up after shim disconnected" id=f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b namespace=k8s.io
Dec 13 15:13:44.918785 env[1196]: time="2024-12-13T15:13:44.918574787Z" level=info msg="cleaning up dead shim"
Dec 13 15:13:44.927653 systemd[1]: cri-containerd-19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680.scope: Deactivated successfully.
Dec 13 15:13:44.928040 systemd[1]: cri-containerd-19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680.scope: Consumed 10.161s CPU time.
Dec 13 15:13:44.945563 env[1196]: time="2024-12-13T15:13:44.945496323Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3608 runtime=io.containerd.runc.v2\n"
Dec 13 15:13:44.948110 env[1196]: time="2024-12-13T15:13:44.948059196Z" level=info msg="StopContainer for \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\" returns successfully"
Dec 13 15:13:44.949651 env[1196]: time="2024-12-13T15:13:44.949601603Z" level=info msg="StopPodSandbox for \"e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c\""
Dec 13 15:13:44.949754 env[1196]: time="2024-12-13T15:13:44.949718033Z" level=info msg="Container to stop \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:44.952341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c-shm.mount: Deactivated successfully.
Dec 13 15:13:44.965737 systemd[1]: cri-containerd-e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c.scope: Deactivated successfully.
Dec 13 15:13:44.977546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680-rootfs.mount: Deactivated successfully.
Dec 13 15:13:44.984565 env[1196]: time="2024-12-13T15:13:44.984490280Z" level=info msg="shim disconnected" id=19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680
Dec 13 15:13:44.984565 env[1196]: time="2024-12-13T15:13:44.984555376Z" level=warning msg="cleaning up after shim disconnected" id=19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680 namespace=k8s.io
Dec 13 15:13:44.984565 env[1196]: time="2024-12-13T15:13:44.984571736Z" level=info msg="cleaning up dead shim"
Dec 13 15:13:45.001426 env[1196]: time="2024-12-13T15:13:45.001327373Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3640 runtime=io.containerd.runc.v2\n"
Dec 13 15:13:45.003527 env[1196]: time="2024-12-13T15:13:45.003484566Z" level=info msg="StopContainer for \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\" returns successfully"
Dec 13 15:13:45.004421 env[1196]: time="2024-12-13T15:13:45.004372634Z" level=info msg="StopPodSandbox for \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\""
Dec 13 15:13:45.004544 env[1196]: time="2024-12-13T15:13:45.004499621Z" level=info msg="Container to stop \"963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:45.004624 env[1196]: time="2024-12-13T15:13:45.004544939Z" level=info msg="Container to stop \"adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:45.004624 env[1196]: time="2024-12-13T15:13:45.004576521Z" level=info msg="Container to stop \"f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:45.004624 env[1196]: time="2024-12-13T15:13:45.004603755Z" level=info msg="Container to stop \"e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:45.004931 env[1196]: time="2024-12-13T15:13:45.004620244Z" level=info msg="Container to stop \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:45.017521 env[1196]: time="2024-12-13T15:13:45.015846116Z" level=info msg="shim disconnected" id=e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c
Dec 13 15:13:45.017521 env[1196]: time="2024-12-13T15:13:45.015902308Z" level=warning msg="cleaning up after shim disconnected" id=e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c namespace=k8s.io
Dec 13 15:13:45.017521 env[1196]: time="2024-12-13T15:13:45.015918145Z" level=info msg="cleaning up dead shim"
Dec 13 15:13:45.020688 systemd[1]: cri-containerd-b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f.scope: Deactivated successfully.
Dec 13 15:13:45.035111 env[1196]: time="2024-12-13T15:13:45.035034915Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3670 runtime=io.containerd.runc.v2\n"
Dec 13 15:13:45.036050 env[1196]: time="2024-12-13T15:13:45.036010446Z" level=info msg="TearDown network for sandbox \"e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c\" successfully"
Dec 13 15:13:45.036152 env[1196]: time="2024-12-13T15:13:45.036048032Z" level=info msg="StopPodSandbox for \"e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c\" returns successfully"
Dec 13 15:13:45.048241 kubelet[2033]: I1213 15:13:45.048197 2033 scope.go:117] "RemoveContainer" containerID="f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b"
Dec 13 15:13:45.052302 env[1196]: time="2024-12-13T15:13:45.052248724Z" level=info msg="RemoveContainer for \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\""
Dec 13 15:13:45.063188 env[1196]: time="2024-12-13T15:13:45.063136103Z" level=info msg="RemoveContainer for \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\" returns successfully"
Dec 13 15:13:45.065375 kubelet[2033]: I1213 15:13:45.063686 2033 scope.go:117] "RemoveContainer" containerID="f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b"
Dec 13 15:13:45.067016 env[1196]: time="2024-12-13T15:13:45.066870303Z" level=error msg="ContainerStatus for \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\": not found"
Dec 13 15:13:45.068003 kubelet[2033]: E1213 15:13:45.067943 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\": not found" containerID="f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b"
Dec 13 15:13:45.070887 kubelet[2033]: I1213 15:13:45.070854 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b"} err="failed to get container status \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f67c9d01960740fb74164776b5cb88eaad8c41cac5a8ab4a933515aa7a76c53b\": not found"
Dec 13 15:13:45.085143 env[1196]: time="2024-12-13T15:13:45.085081216Z" level=info msg="shim disconnected" id=b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f
Dec 13 15:13:45.085396 env[1196]: time="2024-12-13T15:13:45.085145508Z" level=warning msg="cleaning up after shim disconnected" id=b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f namespace=k8s.io
Dec 13 15:13:45.085396 env[1196]: time="2024-12-13T15:13:45.085161626Z" level=info msg="cleaning up dead shim"
Dec 13 15:13:45.096694 env[1196]: time="2024-12-13T15:13:45.096628464Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3696 runtime=io.containerd.runc.v2\n"
Dec 13 15:13:45.097326 env[1196]: time="2024-12-13T15:13:45.097270879Z" level=info msg="TearDown network for sandbox \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" successfully"
Dec 13 15:13:45.097430 env[1196]: time="2024-12-13T15:13:45.097323726Z" level=info msg="StopPodSandbox for \"b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f\" returns successfully"
Dec 13 15:13:45.213681 kubelet[2033]: I1213 15:13:45.212487 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hostproc\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.213681 kubelet[2033]: I1213 15:13:45.212610 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9jg9\" (UniqueName: \"kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-kube-api-access-r9jg9\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.213681 kubelet[2033]: I1213 15:13:45.212684 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hubble-tls\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.213681 kubelet[2033]: I1213 15:13:45.212717 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-run\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.213681 kubelet[2033]: I1213 15:13:45.212812 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b782c812-a1ed-493d-9345-e18658c8b5eb-cilium-config-path\") pod \"b782c812-a1ed-493d-9345-e18658c8b5eb\" (UID: \"b782c812-a1ed-493d-9345-e18658c8b5eb\") "
Dec 13 15:13:45.213681 kubelet[2033]: I1213 15:13:45.212885 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d64fc5fa-9b45-4651-8aae-160965a0cf6b-clustermesh-secrets\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214276 kubelet[2033]: I1213 15:13:45.212920 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgb9p\" (UniqueName: \"kubernetes.io/projected/b782c812-a1ed-493d-9345-e18658c8b5eb-kube-api-access-kgb9p\") pod \"b782c812-a1ed-493d-9345-e18658c8b5eb\" (UID: \"b782c812-a1ed-493d-9345-e18658c8b5eb\") "
Dec 13 15:13:45.214276 kubelet[2033]: I1213 15:13:45.212984 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-cgroup\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214276 kubelet[2033]: I1213 15:13:45.213042 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-net\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214276 kubelet[2033]: I1213 15:13:45.213072 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-bpf-maps\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214276 kubelet[2033]: I1213 15:13:45.213124 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cni-path\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214276 kubelet[2033]: I1213 15:13:45.213168 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-xtables-lock\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214686 kubelet[2033]: I1213 15:13:45.213235 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-config-path\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214686 kubelet[2033]: I1213 15:13:45.213262 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-lib-modules\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214686 kubelet[2033]: I1213 15:13:45.213324 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-kernel\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.214686 kubelet[2033]: I1213 15:13:45.213363 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-etc-cni-netd\") pod \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\" (UID: \"d64fc5fa-9b45-4651-8aae-160965a0cf6b\") "
Dec 13 15:13:45.219454 kubelet[2033]: I1213 15:13:45.217003 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.219886 kubelet[2033]: I1213 15:13:45.216885 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hostproc" (OuterVolumeSpecName: "hostproc") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.221755 kubelet[2033]: I1213 15:13:45.219921 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.221913 kubelet[2033]: I1213 15:13:45.219938 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.222257 kubelet[2033]: I1213 15:13:45.219948 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.222257 kubelet[2033]: I1213 15:13:45.220102 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cni-path" (OuterVolumeSpecName: "cni-path") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.222257 kubelet[2033]: I1213 15:13:45.222194 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.225053 kubelet[2033]: I1213 15:13:45.225016 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.225671 kubelet[2033]: I1213 15:13:45.225583 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.226299 kubelet[2033]: I1213 15:13:45.226271 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.234758 kubelet[2033]: I1213 15:13:45.234692 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b782c812-a1ed-493d-9345-e18658c8b5eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b782c812-a1ed-493d-9345-e18658c8b5eb" (UID: "b782c812-a1ed-493d-9345-e18658c8b5eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 15:13:45.236764 kubelet[2033]: I1213 15:13:45.236722 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d64fc5fa-9b45-4651-8aae-160965a0cf6b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 15:13:45.236907 kubelet[2033]: I1213 15:13:45.236832 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 15:13:45.237285 kubelet[2033]: I1213 15:13:45.237247 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b782c812-a1ed-493d-9345-e18658c8b5eb-kube-api-access-kgb9p" (OuterVolumeSpecName: "kube-api-access-kgb9p") pod "b782c812-a1ed-493d-9345-e18658c8b5eb" (UID: "b782c812-a1ed-493d-9345-e18658c8b5eb"). InnerVolumeSpecName "kube-api-access-kgb9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 15:13:45.237515 kubelet[2033]: I1213 15:13:45.237481 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-kube-api-access-r9jg9" (OuterVolumeSpecName: "kube-api-access-r9jg9") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "kube-api-access-r9jg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 15:13:45.238936 kubelet[2033]: I1213 15:13:45.238907 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d64fc5fa-9b45-4651-8aae-160965a0cf6b" (UID: "d64fc5fa-9b45-4651-8aae-160965a0cf6b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314580 2033 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d64fc5fa-9b45-4651-8aae-160965a0cf6b-clustermesh-secrets\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314636 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kgb9p\" (UniqueName: \"kubernetes.io/projected/b782c812-a1ed-493d-9345-e18658c8b5eb-kube-api-access-kgb9p\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314655 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-cgroup\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314672 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-net\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314689 2033 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cni-path\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314714 2033 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-xtables-lock\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314737 2033 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-bpf-maps\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.317048 kubelet[2033]: I1213 15:13:45.314752 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-config-path\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314784 2033 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-lib-modules\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314801 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-host-proc-sys-kernel\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314820 2033 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-etc-cni-netd\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314850 2033 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hostproc\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314868 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r9jg9\" (UniqueName: \"kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-kube-api-access-r9jg9\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314883 2033 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d64fc5fa-9b45-4651-8aae-160965a0cf6b-hubble-tls\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314899 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d64fc5fa-9b45-4651-8aae-160965a0cf6b-cilium-run\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.318471 kubelet[2033]: I1213 15:13:45.314915 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b782c812-a1ed-493d-9345-e18658c8b5eb-cilium-config-path\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\""
Dec 13 15:13:45.748413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e40da2288295e47ba0ad02306d3cda2bb3fb8b731b289bfaa47fd40be413ba7c-rootfs.mount: Deactivated successfully.
Dec 13 15:13:45.748551 systemd[1]: var-lib-kubelet-pods-b782c812\x2da1ed\x2d493d\x2d9345\x2de18658c8b5eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkgb9p.mount: Deactivated successfully.
Dec 13 15:13:45.748677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f-rootfs.mount: Deactivated successfully.
Dec 13 15:13:45.748777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b39f4d2c109f56f44792f05acce5cbcd9f53b771ac8c73034528007209f65e3f-shm.mount: Deactivated successfully.
Dec 13 15:13:45.748870 systemd[1]: var-lib-kubelet-pods-d64fc5fa\x2d9b45\x2d4651\x2d8aae\x2d160965a0cf6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9jg9.mount: Deactivated successfully.
Dec 13 15:13:45.749029 systemd[1]: var-lib-kubelet-pods-d64fc5fa\x2d9b45\x2d4651\x2d8aae\x2d160965a0cf6b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 15:13:45.749142 systemd[1]: var-lib-kubelet-pods-d64fc5fa\x2d9b45\x2d4651\x2d8aae\x2d160965a0cf6b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 15:13:46.077652 systemd[1]: Removed slice kubepods-besteffort-podb782c812_a1ed_493d_9345_e18658c8b5eb.slice.
Dec 13 15:13:46.085879 kubelet[2033]: I1213 15:13:46.085842 2033 scope.go:117] "RemoveContainer" containerID="19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680"
Dec 13 15:13:46.093286 systemd[1]: Removed slice kubepods-burstable-podd64fc5fa_9b45_4651_8aae_160965a0cf6b.slice.
Dec 13 15:13:46.093432 systemd[1]: kubepods-burstable-podd64fc5fa_9b45_4651_8aae_160965a0cf6b.slice: Consumed 10.324s CPU time.
Dec 13 15:13:46.100845 env[1196]: time="2024-12-13T15:13:46.100411274Z" level=info msg="RemoveContainer for \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\""
Dec 13 15:13:46.104453 env[1196]: time="2024-12-13T15:13:46.104287096Z" level=info msg="RemoveContainer for \"19c8d13059b8a0dafc895e11693c742f5d3c22643191a5f3a724d136dc7d6680\" returns successfully"
Dec 13 15:13:46.105157 kubelet[2033]: I1213 15:13:46.105113 2033 scope.go:117] "RemoveContainer" containerID="963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e"
Dec 13 15:13:46.108484 env[1196]: time="2024-12-13T15:13:46.108427256Z" level=info msg="RemoveContainer for \"963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e\""
Dec 13 15:13:46.112575 env[1196]: time="2024-12-13T15:13:46.112534611Z" level=info msg="RemoveContainer for \"963978c60a11a621f59a7af4aee6514d6c68b7593f0037738da7fe18ea52b41e\" returns successfully"
Dec 13 15:13:46.112901 kubelet[2033]: I1213 15:13:46.112873 2033 scope.go:117] "RemoveContainer" containerID="f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8"
Dec 13 15:13:46.114733 env[1196]: time="2024-12-13T15:13:46.114491901Z" level=info msg="RemoveContainer for \"f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8\""
Dec 13 15:13:46.123949 env[1196]: time="2024-12-13T15:13:46.121868798Z" level=info msg="RemoveContainer for \"f68c361d5599718b4ffc929520279d6cf42b04fd50482dd7054efff029d72fa8\" returns successfully"
Dec 13 15:13:46.124145 kubelet[2033]: I1213 15:13:46.122711 2033 scope.go:117] "RemoveContainer" containerID="adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439"
Dec 13 15:13:46.125841 env[1196]: time="2024-12-13T15:13:46.125765909Z" level=info msg="RemoveContainer for \"adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439\""
Dec 13 15:13:46.129641 env[1196]: time="2024-12-13T15:13:46.129569127Z" level=info msg="RemoveContainer for \"adc5c456117a73a26e4865f7232a62d43a4377f6bc960ce65d8d5c7b3f02e439\" returns successfully"
Dec 13 15:13:46.131607 kubelet[2033]: I1213 15:13:46.131565 2033 scope.go:117] "RemoveContainer" containerID="e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d"
Dec 13 15:13:46.133270 env[1196]: time="2024-12-13T15:13:46.133216333Z" level=info msg="RemoveContainer for \"e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d\""
Dec 13 15:13:46.137107 env[1196]: time="2024-12-13T15:13:46.137061020Z" level=info msg="RemoveContainer for \"e35e9ddce5c4c5b57934a52d23dd079f7b28f6c149bc6d8b82404a06e154a81d\" returns successfully"
Dec 13 15:13:46.592879 kubelet[2033]: I1213 15:13:46.592828 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b782c812-a1ed-493d-9345-e18658c8b5eb" path="/var/lib/kubelet/pods/b782c812-a1ed-493d-9345-e18658c8b5eb/volumes"
Dec 13 15:13:46.594500 kubelet[2033]: I1213 15:13:46.594477 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" path="/var/lib/kubelet/pods/d64fc5fa-9b45-4651-8aae-160965a0cf6b/volumes"
Dec 13 15:13:46.770928 sshd[3555]: pam_unix(sshd:session): session closed for user core
Dec 13 15:13:46.776587 systemd[1]: sshd@22-10.243.84.50:22-139.178.68.195:59084.service: Deactivated successfully.
Dec 13 15:13:46.777812 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 15:13:46.778134 systemd[1]: session-23.scope: Consumed 1.986s CPU time.
Dec 13 15:13:46.778955 systemd-logind[1182]: Session 23 logged out. Waiting for processes to exit.
Dec 13 15:13:46.780414 systemd-logind[1182]: Removed session 23.
Dec 13 15:13:46.922425 systemd[1]: Started sshd@23-10.243.84.50:22-139.178.68.195:39050.service.
Dec 13 15:13:47.823080 sshd[3717]: Accepted publickey for core from 139.178.68.195 port 39050 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:13:47.825901 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:13:47.834486 systemd[1]: Started session-24.scope.
Dec 13 15:13:47.836741 systemd-logind[1182]: New session 24 of user core.
Dec 13 15:13:49.090547 kubelet[2033]: I1213 15:13:49.090461 2033 topology_manager.go:215] "Topology Admit Handler" podUID="d63b2a25-c975-4842-ac23-85316b790b7a" podNamespace="kube-system" podName="cilium-w8d2h"
Dec 13 15:13:49.092044 kubelet[2033]: E1213 15:13:49.092013 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" containerName="mount-bpf-fs"
Dec 13 15:13:49.092044 kubelet[2033]: E1213 15:13:49.092044 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" containerName="clean-cilium-state"
Dec 13 15:13:49.092180 kubelet[2033]: E1213 15:13:49.092076 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" containerName="mount-cgroup"
Dec 13 15:13:49.092180 kubelet[2033]: E1213 15:13:49.092088 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" containerName="apply-sysctl-overwrites"
Dec 13 15:13:49.092180 kubelet[2033]: E1213 15:13:49.092100 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" containerName="cilium-agent"
Dec 13 15:13:49.092180 kubelet[2033]: E1213 15:13:49.092111 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b782c812-a1ed-493d-9345-e18658c8b5eb" containerName="cilium-operator"
Dec 13 15:13:49.092439 kubelet[2033]: I1213 15:13:49.092204 2033 memory_manager.go:354] "RemoveStaleState removing state" podUID="d64fc5fa-9b45-4651-8aae-160965a0cf6b" containerName="cilium-agent"
Dec 13 15:13:49.092439 kubelet[2033]: I1213 15:13:49.092227 2033 memory_manager.go:354] "RemoveStaleState removing state" podUID="b782c812-a1ed-493d-9345-e18658c8b5eb" containerName="cilium-operator"
Dec 13 15:13:49.107010 systemd[1]: Created slice kubepods-burstable-podd63b2a25_c975_4842_ac23_85316b790b7a.slice.
Dec 13 15:13:49.118073 kubelet[2033]: W1213 15:13:49.118013 2033 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:srv-iw0hd.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-iw0hd.gb1.brightbox.com' and this object
Dec 13 15:13:49.118346 kubelet[2033]: E1213 15:13:49.118117 2033 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:srv-iw0hd.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-iw0hd.gb1.brightbox.com' and this object
Dec 13 15:13:49.118346 kubelet[2033]: W1213 15:13:49.118249 2033 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-iw0hd.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-iw0hd.gb1.brightbox.com' and this object
Dec 13 15:13:49.118346 kubelet[2033]: E1213 15:13:49.118274 2033 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-iw0hd.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-iw0hd.gb1.brightbox.com' and this object
Dec 13 15:13:49.118601 kubelet[2033]: W1213 15:13:49.118575 2033 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-iw0hd.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-iw0hd.gb1.brightbox.com' and this object
Dec 13 15:13:49.118732 kubelet[2033]: E1213 15:13:49.118707 2033 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-iw0hd.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-iw0hd.gb1.brightbox.com' and this object
Dec 13 15:13:49.148451 kubelet[2033]: I1213 15:13:49.148408 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-ipsec-secrets\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h"
Dec 13 15:13:49.148774 kubelet[2033]: I1213 15:13:49.148743 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-cgroup\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.148984 kubelet[2033]: I1213 15:13:49.148943 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-net\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.149356 kubelet[2033]: I1213 15:13:49.149324 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-run\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.149629 kubelet[2033]: I1213 15:13:49.149573 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v25\" (UniqueName: \"kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-kube-api-access-p4v25\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.149808 kubelet[2033]: I1213 15:13:49.149785 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-bpf-maps\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150158 kubelet[2033]: I1213 15:13:49.149961 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cni-path\") pod \"cilium-w8d2h\" (UID: 
\"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150158 kubelet[2033]: I1213 15:13:49.150066 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-lib-modules\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150314 kubelet[2033]: I1213 15:13:49.150160 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-etc-cni-netd\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150314 kubelet[2033]: I1213 15:13:49.150229 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-xtables-lock\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150433 kubelet[2033]: I1213 15:13:49.150319 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-hostproc\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150433 kubelet[2033]: I1213 15:13:49.150387 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-clustermesh-secrets\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150433 kubelet[2033]: I1213 15:13:49.150429 2033 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-config-path\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150594 kubelet[2033]: I1213 15:13:49.150480 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-kernel\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.150594 kubelet[2033]: I1213 15:13:49.150551 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-hubble-tls\") pod \"cilium-w8d2h\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " pod="kube-system/cilium-w8d2h" Dec 13 15:13:49.242516 sshd[3717]: pam_unix(sshd:session): session closed for user core Dec 13 15:13:49.246992 systemd[1]: sshd@23-10.243.84.50:22-139.178.68.195:39050.service: Deactivated successfully. Dec 13 15:13:49.248068 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 15:13:49.248789 systemd-logind[1182]: Session 24 logged out. Waiting for processes to exit. Dec 13 15:13:49.250073 systemd-logind[1182]: Removed session 24. Dec 13 15:13:49.392843 systemd[1]: Started sshd@24-10.243.84.50:22-139.178.68.195:39066.service. 
Dec 13 15:13:49.774555 kubelet[2033]: E1213 15:13:49.774077 2033 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 15:13:50.253593 kubelet[2033]: E1213 15:13:50.253506 2033 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 15:13:50.254295 kubelet[2033]: E1213 15:13:50.253755 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-config-path podName:d63b2a25-c975-4842-ac23-85316b790b7a nodeName:}" failed. No retries permitted until 2024-12-13 15:13:50.753669362 +0000 UTC m=+146.413933905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-config-path") pod "cilium-w8d2h" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a") : failed to sync configmap cache: timed out waiting for the condition Dec 13 15:13:50.254295 kubelet[2033]: E1213 15:13:50.253556 2033 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Dec 13 15:13:50.254295 kubelet[2033]: E1213 15:13:50.254211 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-ipsec-secrets podName:d63b2a25-c975-4842-ac23-85316b790b7a nodeName:}" failed. No retries permitted until 2024-12-13 15:13:50.754197555 +0000 UTC m=+146.414462098 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-ipsec-secrets") pod "cilium-w8d2h" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a") : failed to sync secret cache: timed out waiting for the condition Dec 13 15:13:50.255744 kubelet[2033]: E1213 15:13:50.255686 2033 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Dec 13 15:13:50.255744 kubelet[2033]: E1213 15:13:50.255738 2033 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-w8d2h: failed to sync secret cache: timed out waiting for the condition Dec 13 15:13:50.255905 kubelet[2033]: E1213 15:13:50.255809 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-hubble-tls podName:d63b2a25-c975-4842-ac23-85316b790b7a nodeName:}" failed. No retries permitted until 2024-12-13 15:13:50.755792302 +0000 UTC m=+146.416056847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-hubble-tls") pod "cilium-w8d2h" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a") : failed to sync secret cache: timed out waiting for the condition Dec 13 15:13:50.270831 sshd[3730]: Accepted publickey for core from 139.178.68.195 port 39066 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:13:50.272842 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:13:50.280066 systemd-logind[1182]: New session 25 of user core. Dec 13 15:13:50.280836 systemd[1]: Started session-25.scope. 
Dec 13 15:13:50.913609 env[1196]: time="2024-12-13T15:13:50.912817811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8d2h,Uid:d63b2a25-c975-4842-ac23-85316b790b7a,Namespace:kube-system,Attempt:0,}" Dec 13 15:13:50.953746 env[1196]: time="2024-12-13T15:13:50.953641325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:13:50.954095 env[1196]: time="2024-12-13T15:13:50.954017099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:13:50.954283 env[1196]: time="2024-12-13T15:13:50.954211622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:13:50.954743 env[1196]: time="2024-12-13T15:13:50.954691421Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8 pid=3749 runtime=io.containerd.runc.v2 Dec 13 15:13:50.987724 systemd[1]: Started cri-containerd-09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8.scope. 
Dec 13 15:13:51.035008 env[1196]: time="2024-12-13T15:13:51.034873374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8d2h,Uid:d63b2a25-c975-4842-ac23-85316b790b7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8\"" Dec 13 15:13:51.042352 env[1196]: time="2024-12-13T15:13:51.042307292Z" level=info msg="CreateContainer within sandbox \"09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 15:13:51.056041 sshd[3730]: pam_unix(sshd:session): session closed for user core Dec 13 15:13:51.060544 env[1196]: time="2024-12-13T15:13:51.058042125Z" level=info msg="CreateContainer within sandbox \"09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\"" Dec 13 15:13:51.059616 systemd[1]: sshd@24-10.243.84.50:22-139.178.68.195:39066.service: Deactivated successfully. Dec 13 15:13:51.060897 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 15:13:51.062637 systemd-logind[1182]: Session 25 logged out. Waiting for processes to exit. Dec 13 15:13:51.063047 env[1196]: time="2024-12-13T15:13:51.063002745Z" level=info msg="StartContainer for \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\"" Dec 13 15:13:51.065213 systemd-logind[1182]: Removed session 25. Dec 13 15:13:51.087093 systemd[1]: Started cri-containerd-4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863.scope. Dec 13 15:13:51.112809 systemd[1]: cri-containerd-4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863.scope: Deactivated successfully. 
Dec 13 15:13:51.132604 env[1196]: time="2024-12-13T15:13:51.132431950Z" level=info msg="shim disconnected" id=4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863 Dec 13 15:13:51.132784 env[1196]: time="2024-12-13T15:13:51.132611747Z" level=warning msg="cleaning up after shim disconnected" id=4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863 namespace=k8s.io Dec 13 15:13:51.133796 env[1196]: time="2024-12-13T15:13:51.133747248Z" level=info msg="cleaning up dead shim" Dec 13 15:13:51.152760 env[1196]: time="2024-12-13T15:13:51.152698247Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3807 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T15:13:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 15:13:51.153479 env[1196]: time="2024-12-13T15:13:51.153305253Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Dec 13 15:13:51.154335 env[1196]: time="2024-12-13T15:13:51.154187863Z" level=error msg="Failed to pipe stdout of container \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\"" error="reading from a closed fifo" Dec 13 15:13:51.154733 env[1196]: time="2024-12-13T15:13:51.154228939Z" level=error msg="Failed to pipe stderr of container \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\"" error="reading from a closed fifo" Dec 13 15:13:51.157911 env[1196]: time="2024-12-13T15:13:51.157850492Z" level=error msg="StartContainer for \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 15:13:51.158448 kubelet[2033]: E1213 15:13:51.158401 2033 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863" Dec 13 15:13:51.162920 kubelet[2033]: E1213 15:13:51.162887 2033 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 15:13:51.162920 kubelet[2033]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 15:13:51.162920 kubelet[2033]: rm /hostbin/cilium-mount Dec 13 15:13:51.163181 kubelet[2033]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-p4v25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-w8d2h_kube-system(d63b2a25-c975-4842-ac23-85316b790b7a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 15:13:51.163181 kubelet[2033]: E1213 15:13:51.162992 2033 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w8d2h" podUID="d63b2a25-c975-4842-ac23-85316b790b7a" Dec 13 15:13:51.203782 systemd[1]: Started sshd@25-10.243.84.50:22-139.178.68.195:39070.service. Dec 13 15:13:52.090760 sshd[3820]: Accepted publickey for core from 139.178.68.195 port 39070 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 15:13:52.093093 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:13:52.101786 systemd[1]: Started session-26.scope. Dec 13 15:13:52.102843 systemd-logind[1182]: New session 26 of user core. 
Dec 13 15:13:52.122173 env[1196]: time="2024-12-13T15:13:52.118406555Z" level=info msg="StopPodSandbox for \"09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8\"" Dec 13 15:13:52.122173 env[1196]: time="2024-12-13T15:13:52.118519234Z" level=info msg="Container to stop \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 15:13:52.121127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8-shm.mount: Deactivated successfully. Dec 13 15:13:52.137763 systemd[1]: cri-containerd-09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8.scope: Deactivated successfully. Dec 13 15:13:52.172432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8-rootfs.mount: Deactivated successfully. Dec 13 15:13:52.179708 env[1196]: time="2024-12-13T15:13:52.179648327Z" level=info msg="shim disconnected" id=09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8 Dec 13 15:13:52.180269 env[1196]: time="2024-12-13T15:13:52.180237550Z" level=warning msg="cleaning up after shim disconnected" id=09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8 namespace=k8s.io Dec 13 15:13:52.180624 env[1196]: time="2024-12-13T15:13:52.180597756Z" level=info msg="cleaning up dead shim" Dec 13 15:13:52.191890 env[1196]: time="2024-12-13T15:13:52.191834948Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3841 runtime=io.containerd.runc.v2\n" Dec 13 15:13:52.192375 env[1196]: time="2024-12-13T15:13:52.192326512Z" level=info msg="TearDown network for sandbox \"09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8\" successfully" Dec 13 15:13:52.192453 env[1196]: time="2024-12-13T15:13:52.192371352Z" level=info msg="StopPodSandbox for 
\"09b344a6a65940d2460993be45f2cab3f38d0668faaedb1e48126aa6890604a8\" returns successfully" Dec 13 15:13:52.281426 kubelet[2033]: I1213 15:13:52.281352 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-lib-modules\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.282360 kubelet[2033]: I1213 15:13:52.282334 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-net\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.282571 kubelet[2033]: I1213 15:13:52.282536 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-run\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.282809 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-etc-cni-netd\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.282847 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-kernel\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.282868 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.282893 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-config-path\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.282921 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.282944 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-clustermesh-secrets\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.282988 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-cgroup\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283135 kubelet[2033]: I1213 15:13:52.283016 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-xtables-lock\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283663 kubelet[2033]: I1213 15:13:52.283640 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-hostproc\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.283858 kubelet[2033]: I1213 15:13:52.283822 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4v25\" (UniqueName: \"kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-kube-api-access-p4v25\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.284016 kubelet[2033]: I1213 15:13:52.283994 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cni-path\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.284440 kubelet[2033]: I1213 15:13:52.284259 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-hubble-tls\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.284440 kubelet[2033]: I1213 15:13:52.284298 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-ipsec-secrets\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.284440 kubelet[2033]: I1213 15:13:52.284323 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-bpf-maps\") pod \"d63b2a25-c975-4842-ac23-85316b790b7a\" (UID: \"d63b2a25-c975-4842-ac23-85316b790b7a\") " Dec 13 15:13:52.284659 kubelet[2033]: I1213 15:13:52.284635 2033 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-etc-cni-netd\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.284789 kubelet[2033]: I1213 15:13:52.284767 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-kernel\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.284946 kubelet[2033]: I1213 15:13:52.284922 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.286708 kubelet[2033]: I1213 15:13:52.286629 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 15:13:52.286855 kubelet[2033]: I1213 15:13:52.286753 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-hostproc" (OuterVolumeSpecName: "hostproc") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.286855 kubelet[2033]: I1213 15:13:52.282252 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.286855 kubelet[2033]: I1213 15:13:52.282402 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.286855 kubelet[2033]: I1213 15:13:52.282601 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.286855 kubelet[2033]: I1213 15:13:52.286841 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.287193 kubelet[2033]: I1213 15:13:52.286867 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.287193 kubelet[2033]: I1213 15:13:52.286913 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cni-path" (OuterVolumeSpecName: "cni-path") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.292463 systemd[1]: var-lib-kubelet-pods-d63b2a25\x2dc975\x2d4842\x2dac23\x2d85316b790b7a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 15:13:52.295664 systemd[1]: var-lib-kubelet-pods-d63b2a25\x2dc975\x2d4842\x2dac23\x2d85316b790b7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4v25.mount: Deactivated successfully. Dec 13 15:13:52.298176 kubelet[2033]: I1213 15:13:52.296877 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-kube-api-access-p4v25" (OuterVolumeSpecName: "kube-api-access-p4v25") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "kube-api-access-p4v25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 15:13:52.298176 kubelet[2033]: I1213 15:13:52.297415 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 15:13:52.300133 kubelet[2033]: I1213 15:13:52.300068 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 15:13:52.301381 kubelet[2033]: I1213 15:13:52.301351 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d63b2a25-c975-4842-ac23-85316b790b7a" (UID: "d63b2a25-c975-4842-ac23-85316b790b7a"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 15:13:52.386167 kubelet[2033]: I1213 15:13:52.386011 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-host-proc-sys-net\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.386433 kubelet[2033]: I1213 15:13:52.386408 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-run\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.386558 kubelet[2033]: I1213 15:13:52.386536 2033 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-lib-modules\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.386701 kubelet[2033]: I1213 15:13:52.386679 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-config-path\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.386837 kubelet[2033]: I1213 15:13:52.386815 2033 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-clustermesh-secrets\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.387073 kubelet[2033]: I1213 15:13:52.387050 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-cgroup\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.387226 kubelet[2033]: I1213 15:13:52.387190 2033 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-xtables-lock\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.387357 kubelet[2033]: I1213 15:13:52.387336 2033 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-hostproc\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.387489 kubelet[2033]: I1213 15:13:52.387468 2033 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-cni-path\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.387661 kubelet[2033]: I1213 15:13:52.387598 2033 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-hubble-tls\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.387792 kubelet[2033]: I1213 15:13:52.387772 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p4v25\" (UniqueName: \"kubernetes.io/projected/d63b2a25-c975-4842-ac23-85316b790b7a-kube-api-access-p4v25\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.387919 kubelet[2033]: I1213 15:13:52.387897 2033 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d63b2a25-c975-4842-ac23-85316b790b7a-bpf-maps\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.388054 kubelet[2033]: I1213 15:13:52.388033 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d63b2a25-c975-4842-ac23-85316b790b7a-cilium-ipsec-secrets\") on node \"srv-iw0hd.gb1.brightbox.com\" DevicePath \"\"" Dec 13 15:13:52.598874 systemd[1]: Removed slice kubepods-burstable-podd63b2a25_c975_4842_ac23_85316b790b7a.slice. 
Dec 13 15:13:52.780868 systemd[1]: var-lib-kubelet-pods-d63b2a25\x2dc975\x2d4842\x2dac23\x2d85316b790b7a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 15:13:52.781899 systemd[1]: var-lib-kubelet-pods-d63b2a25\x2dc975\x2d4842\x2dac23\x2d85316b790b7a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 15:13:53.120479 kubelet[2033]: I1213 15:13:53.120425 2033 scope.go:117] "RemoveContainer" containerID="4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863" Dec 13 15:13:53.135708 env[1196]: time="2024-12-13T15:13:53.135027460Z" level=info msg="RemoveContainer for \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\"" Dec 13 15:13:53.139352 env[1196]: time="2024-12-13T15:13:53.139301674Z" level=info msg="RemoveContainer for \"4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863\" returns successfully" Dec 13 15:13:53.196020 kubelet[2033]: I1213 15:13:53.195956 2033 topology_manager.go:215] "Topology Admit Handler" podUID="9f082fdc-66e2-4ca4-bf8c-98c17b14d04a" podNamespace="kube-system" podName="cilium-vfqsd" Dec 13 15:13:53.196423 kubelet[2033]: E1213 15:13:53.196398 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d63b2a25-c975-4842-ac23-85316b790b7a" containerName="mount-cgroup" Dec 13 15:13:53.196699 kubelet[2033]: I1213 15:13:53.196676 2033 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63b2a25-c975-4842-ac23-85316b790b7a" containerName="mount-cgroup" Dec 13 15:13:53.204714 systemd[1]: Created slice kubepods-burstable-pod9f082fdc_66e2_4ca4_bf8c_98c17b14d04a.slice. 
Dec 13 15:13:53.294483 kubelet[2033]: I1213 15:13:53.294380 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-bpf-maps\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.294483 kubelet[2033]: I1213 15:13:53.294490 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-cni-path\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294522 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-etc-cni-netd\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294589 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-cilium-config-path\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294645 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-xtables-lock\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294677 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-cilium-cgroup\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294735 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-lib-modules\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294777 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-clustermesh-secrets\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294826 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-cilium-ipsec-secrets\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294862 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-host-proc-sys-kernel\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294913 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-cilium-run\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.294949 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-hostproc\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.295011 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-host-proc-sys-net\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.295062 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-hubble-tls\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.295202 kubelet[2033]: I1213 15:13:53.295102 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgxd9\" (UniqueName: \"kubernetes.io/projected/9f082fdc-66e2-4ca4-bf8c-98c17b14d04a-kube-api-access-fgxd9\") pod \"cilium-vfqsd\" (UID: \"9f082fdc-66e2-4ca4-bf8c-98c17b14d04a\") " pod="kube-system/cilium-vfqsd" Dec 13 15:13:53.511908 env[1196]: time="2024-12-13T15:13:53.509595274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfqsd,Uid:9f082fdc-66e2-4ca4-bf8c-98c17b14d04a,Namespace:kube-system,Attempt:0,}" Dec 13 15:13:53.527944 env[1196]: time="2024-12-13T15:13:53.527849051Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:13:53.528303 env[1196]: time="2024-12-13T15:13:53.528228330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:13:53.528479 env[1196]: time="2024-12-13T15:13:53.528261574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:13:53.528835 env[1196]: time="2024-12-13T15:13:53.528780871Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149 pid=3878 runtime=io.containerd.runc.v2 Dec 13 15:13:53.547716 systemd[1]: Started cri-containerd-9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149.scope. Dec 13 15:13:53.601392 env[1196]: time="2024-12-13T15:13:53.601308667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfqsd,Uid:9f082fdc-66e2-4ca4-bf8c-98c17b14d04a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\"" Dec 13 15:13:53.608429 env[1196]: time="2024-12-13T15:13:53.607340251Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 15:13:53.621740 env[1196]: time="2024-12-13T15:13:53.621673747Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150\"" Dec 13 15:13:53.623777 env[1196]: time="2024-12-13T15:13:53.623096776Z" level=info msg="StartContainer for \"5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150\"" 
Dec 13 15:13:53.646136 systemd[1]: Started cri-containerd-5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150.scope. Dec 13 15:13:53.694993 env[1196]: time="2024-12-13T15:13:53.694918475Z" level=info msg="StartContainer for \"5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150\" returns successfully" Dec 13 15:13:53.734696 systemd[1]: cri-containerd-5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150.scope: Deactivated successfully. Dec 13 15:13:53.770364 env[1196]: time="2024-12-13T15:13:53.769393963Z" level=info msg="shim disconnected" id=5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150 Dec 13 15:13:53.770364 env[1196]: time="2024-12-13T15:13:53.770093729Z" level=warning msg="cleaning up after shim disconnected" id=5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150 namespace=k8s.io Dec 13 15:13:53.770364 env[1196]: time="2024-12-13T15:13:53.770125457Z" level=info msg="cleaning up dead shim" Dec 13 15:13:53.791562 env[1196]: time="2024-12-13T15:13:53.791499808Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3963 runtime=io.containerd.runc.v2\n" Dec 13 15:13:54.127775 env[1196]: time="2024-12-13T15:13:54.127710279Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 15:13:54.146958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854765509.mount: Deactivated successfully. 
Dec 13 15:13:54.158896 env[1196]: time="2024-12-13T15:13:54.158816289Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725\"" Dec 13 15:13:54.160292 env[1196]: time="2024-12-13T15:13:54.160254749Z" level=info msg="StartContainer for \"da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725\"" Dec 13 15:13:54.186923 systemd[1]: Started cri-containerd-da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725.scope. Dec 13 15:13:54.241921 env[1196]: time="2024-12-13T15:13:54.241810097Z" level=info msg="StartContainer for \"da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725\" returns successfully" Dec 13 15:13:54.243577 kubelet[2033]: W1213 15:13:54.243501 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd63b2a25_c975_4842_ac23_85316b790b7a.slice/cri-containerd-4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863.scope WatchSource:0}: container "4ad7b078332ad1c3d9d455df72773a23e5abd964d5696815930e7eeb6dd1d863" in namespace "k8s.io": not found Dec 13 15:13:54.264885 systemd[1]: cri-containerd-da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725.scope: Deactivated successfully. 
Dec 13 15:13:54.295994 env[1196]: time="2024-12-13T15:13:54.295894506Z" level=info msg="shim disconnected" id=da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725 Dec 13 15:13:54.296370 env[1196]: time="2024-12-13T15:13:54.296337980Z" level=warning msg="cleaning up after shim disconnected" id=da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725 namespace=k8s.io Dec 13 15:13:54.296523 env[1196]: time="2024-12-13T15:13:54.296495847Z" level=info msg="cleaning up dead shim" Dec 13 15:13:54.311273 env[1196]: time="2024-12-13T15:13:54.311226815Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4025 runtime=io.containerd.runc.v2\n" Dec 13 15:13:54.593577 kubelet[2033]: I1213 15:13:54.593535 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d63b2a25-c975-4842-ac23-85316b790b7a" path="/var/lib/kubelet/pods/d63b2a25-c975-4842-ac23-85316b790b7a/volumes" Dec 13 15:13:54.775890 kubelet[2033]: E1213 15:13:54.775819 2033 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 15:13:54.784042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725-rootfs.mount: Deactivated successfully. Dec 13 15:13:55.134450 env[1196]: time="2024-12-13T15:13:55.134203018Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 15:13:55.160065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310077845.mount: Deactivated successfully. 
Dec 13 15:13:55.172437 env[1196]: time="2024-12-13T15:13:55.172365990Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc\"" Dec 13 15:13:55.173768 env[1196]: time="2024-12-13T15:13:55.173732889Z" level=info msg="StartContainer for \"58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc\"" Dec 13 15:13:55.204545 systemd[1]: Started cri-containerd-58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc.scope. Dec 13 15:13:55.265082 env[1196]: time="2024-12-13T15:13:55.264948057Z" level=info msg="StartContainer for \"58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc\" returns successfully" Dec 13 15:13:55.267783 systemd[1]: cri-containerd-58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc.scope: Deactivated successfully. Dec 13 15:13:55.297753 env[1196]: time="2024-12-13T15:13:55.297693794Z" level=info msg="shim disconnected" id=58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc Dec 13 15:13:55.298034 env[1196]: time="2024-12-13T15:13:55.297762007Z" level=warning msg="cleaning up after shim disconnected" id=58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc namespace=k8s.io Dec 13 15:13:55.298034 env[1196]: time="2024-12-13T15:13:55.297779500Z" level=info msg="cleaning up dead shim" Dec 13 15:13:55.308951 env[1196]: time="2024-12-13T15:13:55.308897049Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4084 runtime=io.containerd.runc.v2\n" Dec 13 15:13:55.784184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc-rootfs.mount: Deactivated successfully. 
Dec 13 15:13:56.139870 env[1196]: time="2024-12-13T15:13:56.139769206Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 15:13:56.163580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361376214.mount: Deactivated successfully. Dec 13 15:13:56.166802 env[1196]: time="2024-12-13T15:13:56.166722087Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657\"" Dec 13 15:13:56.168214 env[1196]: time="2024-12-13T15:13:56.168181400Z" level=info msg="StartContainer for \"becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657\"" Dec 13 15:13:56.201533 systemd[1]: Started cri-containerd-becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657.scope. Dec 13 15:13:56.244637 systemd[1]: cri-containerd-becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657.scope: Deactivated successfully. 
Dec 13 15:13:56.249907 env[1196]: time="2024-12-13T15:13:56.249834260Z" level=info msg="StartContainer for \"becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657\" returns successfully" Dec 13 15:13:56.252770 env[1196]: time="2024-12-13T15:13:56.247959295Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f082fdc_66e2_4ca4_bf8c_98c17b14d04a.slice/cri-containerd-becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657.scope/memory.events\": no such file or directory" Dec 13 15:13:56.276991 env[1196]: time="2024-12-13T15:13:56.276915210Z" level=info msg="shim disconnected" id=becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657 Dec 13 15:13:56.277337 env[1196]: time="2024-12-13T15:13:56.277306174Z" level=warning msg="cleaning up after shim disconnected" id=becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657 namespace=k8s.io Dec 13 15:13:56.277571 env[1196]: time="2024-12-13T15:13:56.277544625Z" level=info msg="cleaning up dead shim" Dec 13 15:13:56.288319 env[1196]: time="2024-12-13T15:13:56.288255725Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4139 runtime=io.containerd.runc.v2\n" Dec 13 15:13:56.784340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657-rootfs.mount: Deactivated successfully. Dec 13 15:13:57.144502 env[1196]: time="2024-12-13T15:13:57.144447686Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 15:13:57.165052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922181279.mount: Deactivated successfully. 
Dec 13 15:13:57.170873 env[1196]: time="2024-12-13T15:13:57.170821432Z" level=info msg="CreateContainer within sandbox \"9a5020e3eb4fafb5240dfe395e7d192416d090da36327f43285c49996e3bc149\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6\"" Dec 13 15:13:57.173545 env[1196]: time="2024-12-13T15:13:57.173502959Z" level=info msg="StartContainer for \"1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6\"" Dec 13 15:13:57.175714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786807431.mount: Deactivated successfully. Dec 13 15:13:57.203669 systemd[1]: Started cri-containerd-1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6.scope. Dec 13 15:13:57.267496 env[1196]: time="2024-12-13T15:13:57.267404917Z" level=info msg="StartContainer for \"1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6\" returns successfully" Dec 13 15:13:57.361925 kubelet[2033]: W1213 15:13:57.358219 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f082fdc_66e2_4ca4_bf8c_98c17b14d04a.slice/cri-containerd-5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150.scope WatchSource:0}: task 5cb2a2b82a46a302e1e42244f9fa6c92feb94f2864c95110a9b47b410d19a150 not found: not found Dec 13 15:13:58.007005 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 15:13:58.079339 kubelet[2033]: I1213 15:13:58.079297 2033 setters.go:568] "Node became not ready" node="srv-iw0hd.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T15:13:58Z","lastTransitionTime":"2024-12-13T15:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 15:13:58.171513 kubelet[2033]: I1213 
15:13:58.171217 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vfqsd" podStartSLOduration=5.171137598 podStartE2EDuration="5.171137598s" podCreationTimestamp="2024-12-13 15:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:13:58.170402713 +0000 UTC m=+153.830667267" watchObservedRunningTime="2024-12-13 15:13:58.171137598 +0000 UTC m=+153.831402147" Dec 13 15:13:58.942889 systemd[1]: run-containerd-runc-k8s.io-1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6-runc.PFtLC3.mount: Deactivated successfully. Dec 13 15:13:59.053531 kubelet[2033]: E1213 15:13:59.053253 2033 upgradeaware.go:439] Error proxying data from backend to client: write tcp 10.243.84.50:10250->10.243.84.50:60538: write: connection reset by peer Dec 13 15:14:00.472667 kubelet[2033]: W1213 15:14:00.472558 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f082fdc_66e2_4ca4_bf8c_98c17b14d04a.slice/cri-containerd-da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725.scope WatchSource:0}: task da1ecbae76d8f94d24be33303897767c042e9be72f93d5fe50daaf947e28c725 not found: not found Dec 13 15:14:01.187844 systemd[1]: run-containerd-runc-k8s.io-1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6-runc.HsyNGH.mount: Deactivated successfully. Dec 13 15:14:01.718914 systemd-networkd[1030]: lxc_health: Link UP Dec 13 15:14:01.728290 systemd-networkd[1030]: lxc_health: Gained carrier Dec 13 15:14:01.729088 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 15:14:03.546839 systemd[1]: run-containerd-runc-k8s.io-1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6-runc.r9Ntj4.mount: Deactivated successfully. 
Dec 13 15:14:03.597103 kubelet[2033]: W1213 15:14:03.597025 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f082fdc_66e2_4ca4_bf8c_98c17b14d04a.slice/cri-containerd-58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc.scope WatchSource:0}: task 58a377e5f433351b747caf5a6e2dbe505524381b231dfa5cd30672ee7dfcd2cc not found: not found Dec 13 15:14:03.644129 systemd-networkd[1030]: lxc_health: Gained IPv6LL Dec 13 15:14:05.881616 systemd[1]: run-containerd-runc-k8s.io-1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6-runc.1GxxA9.mount: Deactivated successfully. Dec 13 15:14:06.709190 kubelet[2033]: W1213 15:14:06.709098 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f082fdc_66e2_4ca4_bf8c_98c17b14d04a.slice/cri-containerd-becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657.scope WatchSource:0}: task becd0ec79452b583f728ba0c516211e9cf03f15a702af39b58f727974dfaa657 not found: not found Dec 13 15:14:08.136256 systemd[1]: run-containerd-runc-k8s.io-1cb98f296c5880337a49f0109f756b5cac6230d99f6aaee324df545f6734a2e6-runc.en2gzO.mount: Deactivated successfully. Dec 13 15:14:08.367625 sshd[3820]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:08.378305 systemd[1]: sshd@25-10.243.84.50:22-139.178.68.195:39070.service: Deactivated successfully. Dec 13 15:14:08.379551 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 15:14:08.381300 systemd-logind[1182]: Session 26 logged out. Waiting for processes to exit. Dec 13 15:14:08.384022 systemd-logind[1182]: Removed session 26.