Dec 13 15:43:55.932631 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 15:43:55.932674 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 15:43:55.932695 kernel: BIOS-provided physical RAM map:
Dec 13 15:43:55.932706 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 15:43:55.932716 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 15:43:55.932726 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 15:43:55.932738 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 15:43:55.932749 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 15:43:55.932759 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 15:43:55.932769 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 15:43:55.932784 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 15:43:55.932794 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 15:43:55.932805 kernel: NX (Execute Disable) protection: active
Dec 13 15:43:55.932815 kernel: SMBIOS 2.8 present.
Dec 13 15:43:55.932828 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 15:43:55.932839 kernel: Hypervisor detected: KVM
Dec 13 15:43:55.932854 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 15:43:55.932865 kernel: kvm-clock: cpu 0, msr 7e19a001, primary cpu clock
Dec 13 15:43:55.932877 kernel: kvm-clock: using sched offset of 4696181489 cycles
Dec 13 15:43:55.932888 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 15:43:55.932900 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 15:43:55.932911 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 15:43:55.932923 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 15:43:55.932934 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 15:43:55.932945 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 15:43:55.932960 kernel: Using GB pages for direct mapping
Dec 13 15:43:55.932972 kernel: ACPI: Early table checksum verification disabled
Dec 13 15:43:55.932983 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 15:43:55.932994 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:43:55.933005 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:43:55.933017 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:43:55.933028 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 15:43:55.933039 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:43:55.933050 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:43:55.933065 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:43:55.933077 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:43:55.933088 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 15:43:55.933099 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 15:43:55.933110 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 15:43:55.933122 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 15:43:55.933138 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 15:43:55.933154 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 15:43:55.933173 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 15:43:55.933185 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 15:43:55.933209 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 15:43:55.933222 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 15:43:55.933234 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 15:43:55.933245 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 15:43:55.933262 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 15:43:55.933274 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 15:43:55.933286 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 15:43:55.933297 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 15:43:55.933309 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 15:43:55.933321 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 15:43:55.933332 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 15:43:55.933344 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 15:43:55.933356 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 15:43:55.933368 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 15:43:55.933383 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 15:43:55.933395 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 15:43:55.933407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 15:43:55.933419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 15:43:55.933431 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 15:43:55.933443 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 15:43:55.933455 kernel: Zone ranges:
Dec 13 15:43:55.933467 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 15:43:55.933479 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 15:43:55.933540 kernel: Normal empty
Dec 13 15:43:55.933570 kernel: Movable zone start for each node
Dec 13 15:43:55.933582 kernel: Early memory node ranges
Dec 13 15:43:55.933594 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 15:43:55.933606 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 15:43:55.933618 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 15:43:55.933630 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 15:43:55.933642 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 15:43:55.933654 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 15:43:55.933672 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 15:43:55.933684 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 15:43:55.933696 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 15:43:55.933708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 15:43:55.933720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 15:43:55.933732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 15:43:55.933744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 15:43:55.933756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 15:43:55.933768 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 15:43:55.933784 kernel: TSC deadline timer available
Dec 13 15:43:55.933796 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 15:43:55.933808 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 15:43:55.933820 kernel: Booting paravirtualized kernel on KVM
Dec 13 15:43:55.933832 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 15:43:55.933844 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 15:43:55.933858 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 15:43:55.933870 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 15:43:55.933881 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 15:43:55.933897 kernel: kvm-guest: stealtime: cpu 0, msr 7fa1c0c0
Dec 13 15:43:55.933909 kernel: kvm-guest: PV spinlocks enabled
Dec 13 15:43:55.933921 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 15:43:55.933933 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 15:43:55.933945 kernel: Policy zone: DMA32
Dec 13 15:43:55.933958 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 15:43:55.933971 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 15:43:55.933983 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 15:43:55.933998 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 15:43:55.934011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 15:43:55.934023 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 192524K reserved, 0K cma-reserved)
Dec 13 15:43:55.934035 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 15:43:55.934047 kernel: Kernel/User page tables isolation: enabled
Dec 13 15:43:55.934059 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 15:43:55.934071 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 15:43:55.934083 kernel: rcu: Hierarchical RCU implementation.
Dec 13 15:43:55.934096 kernel: rcu: RCU event tracing is enabled.
Dec 13 15:43:55.934112 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 15:43:55.934124 kernel: Rude variant of Tasks RCU enabled.
Dec 13 15:43:55.934136 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 15:43:55.934149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 15:43:55.934161 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 15:43:55.934173 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 15:43:55.934190 kernel: random: crng init done
Dec 13 15:43:55.934228 kernel: Console: colour VGA+ 80x25
Dec 13 15:43:55.934241 kernel: printk: console [tty0] enabled
Dec 13 15:43:55.934253 kernel: printk: console [ttyS0] enabled
Dec 13 15:43:55.934265 kernel: ACPI: Core revision 20210730
Dec 13 15:43:55.934278 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 15:43:55.934294 kernel: x2apic enabled
Dec 13 15:43:55.934306 kernel: Switched APIC routing to physical x2apic.
Dec 13 15:43:55.934319 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 15:43:55.934332 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 15:43:55.934345 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 15:43:55.934361 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 15:43:55.934373 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 15:43:55.934386 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 15:43:55.934398 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 15:43:55.934410 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 15:43:55.934423 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 15:43:55.934435 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 15:43:55.934447 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 15:43:55.934460 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 15:43:55.934472 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 15:43:55.934492 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 15:43:55.934517 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 15:43:55.934529 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 15:43:55.934542 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 15:43:55.934555 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 15:43:55.934567 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 15:43:55.934593 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 15:43:55.934605 kernel: Freeing SMP alternatives memory: 32K
Dec 13 15:43:55.934617 kernel: pid_max: default: 32768 minimum: 301
Dec 13 15:43:55.934628 kernel: LSM: Security Framework initializing
Dec 13 15:43:55.934654 kernel: SELinux: Initializing.
Dec 13 15:43:55.934671 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 15:43:55.934687 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 15:43:55.934700 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 15:43:55.934712 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 15:43:55.934725 kernel: signal: max sigframe size: 1776
Dec 13 15:43:55.934737 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 15:43:55.934750 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 15:43:55.934762 kernel: smp: Bringing up secondary CPUs ...
Dec 13 15:43:55.934775 kernel: x86: Booting SMP configuration:
Dec 13 15:43:55.934787 kernel: .... node #0, CPUs: #1
Dec 13 15:43:55.934803 kernel: kvm-clock: cpu 1, msr 7e19a041, secondary cpu clock
Dec 13 15:43:55.934816 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 15:43:55.934828 kernel: kvm-guest: stealtime: cpu 1, msr 7fa5c0c0
Dec 13 15:43:55.934841 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 15:43:55.934853 kernel: smpboot: Max logical packages: 16
Dec 13 15:43:55.934865 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 15:43:55.934878 kernel: devtmpfs: initialized
Dec 13 15:43:55.934890 kernel: x86/mm: Memory block size: 128MB
Dec 13 15:43:55.934903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 15:43:55.934919 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 15:43:55.934932 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 15:43:55.934944 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 15:43:55.934957 kernel: audit: initializing netlink subsys (disabled)
Dec 13 15:43:55.934969 kernel: audit: type=2000 audit(1734104635.241:1): state=initialized audit_enabled=0 res=1
Dec 13 15:43:55.934981 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 15:43:55.934994 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 15:43:55.935007 kernel: cpuidle: using governor menu
Dec 13 15:43:55.935019 kernel: ACPI: bus type PCI registered
Dec 13 15:43:55.935035 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 15:43:55.935048 kernel: dca service started, version 1.12.1
Dec 13 15:43:55.935060 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 15:43:55.935073 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 15:43:55.935085 kernel: PCI: Using configuration type 1 for base access
Dec 13 15:43:55.935098 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 15:43:55.935111 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 15:43:55.935123 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 15:43:55.935136 kernel: ACPI: Added _OSI(Module Device)
Dec 13 15:43:55.935152 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 15:43:55.935164 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 15:43:55.935177 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 15:43:55.935189 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 15:43:55.935212 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 15:43:55.935225 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 15:43:55.935238 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 15:43:55.935250 kernel: ACPI: Interpreter enabled
Dec 13 15:43:55.935262 kernel: ACPI: PM: (supports S0 S5)
Dec 13 15:43:55.935279 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 15:43:55.935292 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 15:43:55.935305 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 15:43:55.935317 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 15:43:55.936475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 15:43:55.936692 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 15:43:55.936870 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 15:43:55.936889 kernel: PCI host bridge to bus 0000:00
Dec 13 15:43:55.937087 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 15:43:55.937257 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 15:43:55.937409 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 15:43:55.937613 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 15:43:55.937793 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 15:43:55.937944 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 15:43:55.938110 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 15:43:55.938322 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 15:43:55.938527 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 15:43:55.938741 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 15:43:55.938933 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 15:43:55.939105 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 15:43:55.939287 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 15:43:55.947566 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.947770 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 15:43:55.947986 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.948152 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 15:43:55.948347 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.948530 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 15:43:55.948742 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.948926 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 15:43:55.949110 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.949291 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 15:43:55.949471 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.949647 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 15:43:55.949837 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.950007 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 15:43:55.950205 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 15:43:55.950366 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 15:43:55.950567 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 15:43:55.950730 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 15:43:55.950894 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 15:43:55.951051 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 15:43:55.951232 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 15:43:55.951441 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 15:43:55.951684 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 15:43:55.951842 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 15:43:55.951996 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 15:43:55.952184 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 15:43:55.952361 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 15:43:55.959596 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 15:43:55.959777 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 15:43:55.959940 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 15:43:55.960193 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 15:43:55.960367 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 15:43:55.960586 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 15:43:55.960751 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 15:43:55.960909 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 15:43:55.961064 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 15:43:55.961240 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:43:55.961434 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 15:43:55.961651 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 15:43:55.961824 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 15:43:55.961995 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 15:43:55.962162 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 15:43:55.962362 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 15:43:55.962541 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 15:43:55.962698 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 15:43:55.962860 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 15:43:55.963018 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:43:55.963221 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 15:43:55.963390 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 15:43:55.963563 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 15:43:55.963720 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 15:43:55.963876 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:43:55.964043 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 15:43:55.964213 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 15:43:55.964372 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:43:55.964546 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 15:43:55.964699 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 15:43:55.964855 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:43:55.965011 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 15:43:55.965167 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 15:43:55.965343 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:43:55.965515 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 15:43:55.965675 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 15:43:55.965832 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 15:43:55.966005 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 15:43:55.966185 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 15:43:55.966357 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 15:43:55.966377 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 15:43:55.966390 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 15:43:55.966410 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 15:43:55.966423 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 15:43:55.966436 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 15:43:55.966449 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 15:43:55.966462 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 15:43:55.966475 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 15:43:55.966488 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 15:43:55.969577 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 15:43:55.969595 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 15:43:55.969615 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 15:43:55.969628 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 15:43:55.969641 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 15:43:55.969654 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 15:43:55.969667 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 15:43:55.969680 kernel: iommu: Default domain type: Translated
Dec 13 15:43:55.969693 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 15:43:55.969887 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 15:43:55.970056 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 15:43:55.970229 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 15:43:55.970249 kernel: vgaarb: loaded
Dec 13 15:43:55.970263 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 15:43:55.970276 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 15:43:55.970289 kernel: PTP clock support registered
Dec 13 15:43:55.970302 kernel: PCI: Using ACPI for IRQ routing
Dec 13 15:43:55.970315 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 15:43:55.970327 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 15:43:55.970346 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 15:43:55.970358 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 15:43:55.970371 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 15:43:55.970384 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 15:43:55.970397 kernel: pnp: PnP ACPI init
Dec 13 15:43:55.970607 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 15:43:55.970629 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 15:43:55.970642 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 15:43:55.970662 kernel: NET: Registered PF_INET protocol family
Dec 13 15:43:55.970675 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 15:43:55.970688 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 15:43:55.970701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 15:43:55.970714 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 15:43:55.970738 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 15:43:55.970750 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 15:43:55.970763 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 15:43:55.970775 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 15:43:55.970804 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 15:43:55.970817 kernel: NET: Registered PF_XDP protocol family
Dec 13 15:43:55.970972 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 15:43:55.971129 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 15:43:55.971301 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 15:43:55.971456 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 15:43:55.971633 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 15:43:55.971795 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 15:43:55.971950 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 15:43:55.972104 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 15:43:55.972272 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 15:43:55.972425 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 15:43:55.972596 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 15:43:55.972758 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 15:43:55.972912 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 15:43:55.973066 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 15:43:55.973232 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 15:43:55.973386 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 15:43:55.973561 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 15:43:55.973722 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 15:43:55.973887 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 15:43:55.974059 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 15:43:55.974225 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 15:43:55.974387 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:43:55.974577 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 15:43:55.974732 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 15:43:55.974885 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 15:43:55.975042 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:43:55.975206 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 15:43:55.975366 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 15:43:55.975545 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 15:43:55.975702 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:43:55.975859 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 15:43:55.976017 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 15:43:55.976172 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 15:43:55.976341 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:43:55.986478 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 15:43:55.986749 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 15:43:55.986925 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 15:43:55.987084 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:43:55.987257 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 15:43:55.987415 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 15:43:55.987593 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 15:43:55.987760 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:43:55.987935 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 15:43:55.988098 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 15:43:55.988267 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 15:43:55.988426 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 15:43:55.988598 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 15:43:55.988779 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 15:43:55.988940 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 15:43:55.989095 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 15:43:55.989261 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 15:43:55.989406 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 15:43:55.989569 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 15:43:55.989730 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 15:43:55.989884 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 15:43:55.990026 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 15:43:55.990221 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 15:43:55.990389 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 15:43:55.990562 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:43:55.990746 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 15:43:55.990931 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 15:43:55.991096 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 15:43:55.991270 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:43:55.991458 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 15:43:55.991644 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 15:43:55.991807 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:43:55.991985 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 15:43:55.992137 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 15:43:55.992308 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:43:55.996435 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 15:43:55.996626 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 15:43:55.996789 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:43:55.996986 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 15:43:55.997137 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 15:43:55.997299 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:43:55.997467 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 15:43:55.997642 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 15:43:55.997791 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 15:43:55.997959 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 15:43:55.998110 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 15:43:55.998272 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 15:43:55.998294 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 15:43:55.998308 kernel: PCI: CLS 0 bytes,
default 64 Dec 13 15:43:55.998328 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 15:43:55.998341 kernel: software IO TLB: mapped [mem 0x0000000073000000-0x0000000077000000] (64MB) Dec 13 15:43:55.998355 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 15:43:55.998369 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 15:43:55.998382 kernel: Initialise system trusted keyrings Dec 13 15:43:55.998396 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 15:43:55.998409 kernel: Key type asymmetric registered Dec 13 15:43:55.998422 kernel: Asymmetric key parser 'x509' registered Dec 13 15:43:55.998435 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 15:43:55.998453 kernel: io scheduler mq-deadline registered Dec 13 15:43:55.998466 kernel: io scheduler kyber registered Dec 13 15:43:55.998479 kernel: io scheduler bfq registered Dec 13 15:43:55.998654 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 15:43:55.998813 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 15:43:55.998968 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:43:55.999149 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 15:43:55.999321 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 15:43:55.999485 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:43:55.999664 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 15:43:55.999820 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 15:43:55.999974 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 15:43:56.000130 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 15:43:56.000300 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 15:43:56.000463 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:43:56.000638 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 15:43:56.000796 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 15:43:56.000951 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:43:56.001106 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 15:43:56.001277 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 15:43:56.001441 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:43:56.001615 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 15:43:56.001774 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 15:43:56.001930 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:43:56.002086 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 15:43:56.002256 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 15:43:56.002420 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:43:56.002441 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 15:43:56.002455 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 15:43:56.002469 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 15:43:56.002483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 15:43:56.009718 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 15:43:56.009744 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 15:43:56.009778 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 15:43:56.009792 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 15:43:56.009806 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 15:43:56.010020 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 15:43:56.010176 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 15:43:56.010349 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T15:43:55 UTC (1734104635) Dec 13 15:43:56.010512 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 15:43:56.010533 kernel: intel_pstate: CPU model not supported Dec 13 15:43:56.010553 kernel: NET: Registered PF_INET6 protocol family Dec 13 15:43:56.010566 kernel: Segment Routing with IPv6 Dec 13 15:43:56.010580 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 15:43:56.010594 kernel: NET: Registered PF_PACKET protocol family Dec 13 15:43:56.010607 kernel: Key type dns_resolver registered Dec 13 15:43:56.010620 kernel: IPI shorthand broadcast: enabled Dec 13 15:43:56.010634 kernel: sched_clock: Marking stable (991002104, 223212460)->(1498483527, -284268963) Dec 13 15:43:56.010647 kernel: registered taskstats version 1 Dec 13 15:43:56.010661 kernel: Loading compiled-in X.509 certificates Dec 13 15:43:56.010678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 15:43:56.010692 kernel: Key type .fscrypt registered Dec 13 15:43:56.010705 kernel: Key type fscrypt-provisioning registered Dec 13 15:43:56.010718 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 15:43:56.010732 kernel: ima: Allocated hash algorithm: sha1
Dec 13 15:43:56.010745 kernel: ima: No architecture policies found
Dec 13 15:43:56.010758 kernel: clk: Disabling unused clocks
Dec 13 15:43:56.010771 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 15:43:56.010786 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 15:43:56.010805 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 15:43:56.010819 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 15:43:56.010832 kernel: Run /init as init process
Dec 13 15:43:56.010846 kernel: with arguments:
Dec 13 15:43:56.010859 kernel: /init
Dec 13 15:43:56.010871 kernel: with environment:
Dec 13 15:43:56.010884 kernel: HOME=/
Dec 13 15:43:56.010897 kernel: TERM=linux
Dec 13 15:43:56.010910 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 15:43:56.010939 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 15:43:56.010957 systemd[1]: Detected virtualization kvm.
Dec 13 15:43:56.010976 systemd[1]: Detected architecture x86-64.
Dec 13 15:43:56.010992 systemd[1]: Running in initrd.
Dec 13 15:43:56.011006 systemd[1]: No hostname configured, using default hostname.
Dec 13 15:43:56.011020 systemd[1]: Hostname set to .
Dec 13 15:43:56.011034 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 15:43:56.011052 systemd[1]: Queued start job for default target initrd.target.
Dec 13 15:43:56.011066 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 15:43:56.011079 systemd[1]: Reached target cryptsetup.target.
Dec 13 15:43:56.011093 systemd[1]: Reached target paths.target.
Dec 13 15:43:56.011107 systemd[1]: Reached target slices.target.
Dec 13 15:43:56.011121 systemd[1]: Reached target swap.target.
Dec 13 15:43:56.011135 systemd[1]: Reached target timers.target.
Dec 13 15:43:56.011150 systemd[1]: Listening on iscsid.socket.
Dec 13 15:43:56.011168 systemd[1]: Listening on iscsiuio.socket.
Dec 13 15:43:56.011182 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 15:43:56.011208 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 15:43:56.011223 systemd[1]: Listening on systemd-journald.socket.
Dec 13 15:43:56.011238 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 15:43:56.011252 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 15:43:56.011266 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 15:43:56.011280 systemd[1]: Reached target sockets.target.
Dec 13 15:43:56.011294 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 15:43:56.011313 systemd[1]: Finished network-cleanup.service.
Dec 13 15:43:56.011327 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 15:43:56.011341 systemd[1]: Starting systemd-journald.service...
Dec 13 15:43:56.011356 systemd[1]: Starting systemd-modules-load.service...
Dec 13 15:43:56.011370 systemd[1]: Starting systemd-resolved.service...
Dec 13 15:43:56.011384 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 15:43:56.011398 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 15:43:56.011412 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 15:43:56.011441 systemd-journald[201]: Journal started
Dec 13 15:43:56.011529 systemd-journald[201]: Runtime Journal (/run/log/journal/f0def2d0de6840ec99f17cfb192b0836) is 4.7M, max 38.1M, 33.3M free.
Dec 13 15:43:55.930174 systemd-modules-load[202]: Inserted module 'overlay'
Dec 13 15:43:56.023821 kernel: Bridge firewalling registered
Dec 13 15:43:55.983911 systemd-resolved[203]: Positive Trust Anchors:
Dec 13 15:43:56.038042 systemd[1]: Started systemd-resolved.service.
Dec 13 15:43:56.038073 kernel: audit: type=1130 audit(1734104636.023:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.038102 systemd[1]: Started systemd-journald.service.
Dec 13 15:43:56.038123 kernel: audit: type=1130 audit(1734104636.031:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:55.983937 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 15:43:56.043885 kernel: SCSI subsystem initialized
Dec 13 15:43:55.983984 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 15:43:56.072645 kernel: audit: type=1130 audit(1734104636.040:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.072681 kernel: audit: type=1130 audit(1734104636.040:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.072700 kernel: audit: type=1130 audit(1734104636.041:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.072718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 15:43:56.072736 kernel: device-mapper: uevent: version 1.0.3
Dec 13 15:43:56.072759 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 15:43:56.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:55.987935 systemd-resolved[203]: Defaulting to hostname 'linux'.
Dec 13 15:43:56.013266 systemd-modules-load[202]: Inserted module 'br_netfilter'
Dec 13 15:43:56.081478 kernel: audit: type=1130 audit(1734104636.074:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.040801 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 15:43:56.090460 kernel: audit: type=1130 audit(1734104636.082:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.041598 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 15:43:56.042346 systemd[1]: Reached target nss-lookup.target.
Dec 13 15:43:56.044009 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 15:43:56.045735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 15:43:56.067761 systemd-modules-load[202]: Inserted module 'dm_multipath'
Dec 13 15:43:56.073450 systemd[1]: Finished systemd-modules-load.service.
Dec 13 15:43:56.074888 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 15:43:56.088868 systemd[1]: Starting systemd-sysctl.service...
Dec 13 15:43:56.106282 kernel: audit: type=1130 audit(1734104636.100:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.100090 systemd[1]: Finished systemd-sysctl.service.
Dec 13 15:43:56.106593 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 15:43:56.126719 kernel: audit: type=1130 audit(1734104636.107:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.109120 systemd[1]: Starting dracut-cmdline.service...
Dec 13 15:43:56.130267 dracut-cmdline[225]: dracut-dracut-053
Dec 13 15:43:56.134350 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 15:43:56.219539 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 15:43:56.241540 kernel: iscsi: registered transport (tcp)
Dec 13 15:43:56.270364 kernel: iscsi: registered transport (qla4xxx)
Dec 13 15:43:56.270442 kernel: QLogic iSCSI HBA Driver
Dec 13 15:43:56.317633 systemd[1]: Finished dracut-cmdline.service.
Dec 13 15:43:56.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.319688 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 15:43:56.378569 kernel: raid6: sse2x4 gen() 13665 MB/s
Dec 13 15:43:56.396574 kernel: raid6: sse2x4 xor() 7968 MB/s
Dec 13 15:43:56.414573 kernel: raid6: sse2x2 gen() 9399 MB/s
Dec 13 15:43:56.432545 kernel: raid6: sse2x2 xor() 7934 MB/s
Dec 13 15:43:56.450545 kernel: raid6: sse2x1 gen() 9874 MB/s
Dec 13 15:43:56.469218 kernel: raid6: sse2x1 xor() 7186 MB/s
Dec 13 15:43:56.469300 kernel: raid6: using algorithm sse2x4 gen() 13665 MB/s
Dec 13 15:43:56.469319 kernel: raid6: .... xor() 7968 MB/s, rmw enabled
Dec 13 15:43:56.470527 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 15:43:56.487538 kernel: xor: automatically using best checksumming function avx
Dec 13 15:43:56.602543 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 15:43:56.615067 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 15:43:56.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.615000 audit: BPF prog-id=7 op=LOAD
Dec 13 15:43:56.615000 audit: BPF prog-id=8 op=LOAD
Dec 13 15:43:56.616936 systemd[1]: Starting systemd-udevd.service...
Dec 13 15:43:56.634152 systemd-udevd[403]: Using default interface naming scheme 'v252'.
Dec 13 15:43:56.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.642295 systemd[1]: Started systemd-udevd.service.
Dec 13 15:43:56.644098 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 15:43:56.661518 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Dec 13 15:43:56.700839 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 15:43:56.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.702633 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 15:43:56.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:56.793148 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 15:43:56.892525 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 15:43:56.936973 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 15:43:56.936999 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 15:43:56.937018 kernel: GPT:17805311 != 125829119
Dec 13 15:43:56.937043 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 15:43:56.937060 kernel: GPT:17805311 != 125829119
Dec 13 15:43:56.937076 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 15:43:56.937092 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 15:43:56.937109 kernel: ACPI: bus type USB registered
Dec 13 15:43:56.939527 kernel: usbcore: registered new interface driver usbfs
Dec 13 15:43:56.945532 kernel: AVX version of gcm_enc/dec engaged.
Dec 13 15:43:56.945566 kernel: usbcore: registered new interface driver hub
Dec 13 15:43:56.946521 kernel: AES CTR mode by8 optimization enabled
Dec 13 15:43:56.952519 kernel: usbcore: registered new device driver usb
Dec 13 15:43:56.953525 kernel: libata version 3.00 loaded.
Dec 13 15:43:56.991526 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445)
Dec 13 15:43:56.996552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 15:43:57.089668 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 15:43:57.089992 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 15:43:57.090016 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 15:43:57.090225 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 15:43:57.090403 kernel: scsi host0: ahci
Dec 13 15:43:57.090628 kernel: scsi host1: ahci
Dec 13 15:43:57.090823 kernel: scsi host2: ahci
Dec 13 15:43:57.091009 kernel: scsi host3: ahci
Dec 13 15:43:57.091202 kernel: scsi host4: ahci
Dec 13 15:43:57.091404 kernel: scsi host5: ahci
Dec 13 15:43:57.091605 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Dec 13 15:43:57.091625 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Dec 13 15:43:57.091642 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Dec 13 15:43:57.091659 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Dec 13 15:43:57.091676 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Dec 13 15:43:57.091693 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Dec 13 15:43:57.095197 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 15:43:57.099716 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 15:43:57.100482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 15:43:57.106454 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 15:43:57.108360 systemd[1]: Starting disk-uuid.service...
Dec 13 15:43:57.119521 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 15:43:57.124667 disk-uuid[524]: Primary Header is updated.
Dec 13 15:43:57.124667 disk-uuid[524]: Secondary Entries is updated.
Dec 13 15:43:57.124667 disk-uuid[524]: Secondary Header is updated.
Dec 13 15:43:57.321543 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 15:43:57.321631 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 15:43:57.331577 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 15:43:57.331640 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 15:43:57.332531 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 15:43:57.334921 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 15:43:57.342525 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 15:43:57.362721 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 15:43:57.362923 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 15:43:57.363102 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 15:43:57.363296 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 15:43:57.363471 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 15:43:57.363677 kernel: hub 1-0:1.0: USB hub found
Dec 13 15:43:57.363894 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 15:43:57.364091 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 15:43:57.364324 kernel: hub 2-0:1.0: USB hub found
Dec 13 15:43:57.364555 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 15:43:57.597535 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 15:43:57.738563 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 15:43:57.744854 kernel: usbcore: registered new interface driver usbhid
Dec 13 15:43:57.744907 kernel: usbhid: USB HID core driver
Dec 13 15:43:57.754209 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 15:43:57.754254 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Dec 13 15:43:58.136524 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 15:43:58.136862 disk-uuid[525]: The operation has completed successfully.
Dec 13 15:43:58.195848 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 15:43:58.197023 systemd[1]: Finished disk-uuid.service.
Dec 13 15:43:58.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:58.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:43:58.203990 systemd[1]: Starting verity-setup.service...
Dec 13 15:43:58.222523 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Dec 13 15:43:58.275816 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 15:43:58.278541 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 15:43:58.281452 systemd[1]: Finished verity-setup.service.
Dec 13 15:43:58.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.374523 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 15:43:58.375275 systemd[1]: Mounted sysusr-usr.mount. Dec 13 15:43:58.376125 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 15:43:58.377097 systemd[1]: Starting ignition-setup.service... Dec 13 15:43:58.379849 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 15:43:58.394591 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 15:43:58.394664 kernel: BTRFS info (device vda6): using free space tree Dec 13 15:43:58.394685 kernel: BTRFS info (device vda6): has skinny extents Dec 13 15:43:58.415105 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 15:43:58.424210 systemd[1]: Finished ignition-setup.service. Dec 13 15:43:58.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.426088 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 15:43:58.549069 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 15:43:58.552018 systemd[1]: Starting systemd-networkd.service... Dec 13 15:43:58.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:43:58.550000 audit: BPF prog-id=9 op=LOAD Dec 13 15:43:58.585325 systemd-networkd[705]: lo: Link UP Dec 13 15:43:58.585339 systemd-networkd[705]: lo: Gained carrier Dec 13 15:43:58.586718 systemd-networkd[705]: Enumeration completed Dec 13 15:43:58.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.586844 systemd[1]: Started systemd-networkd.service. Dec 13 15:43:58.587825 systemd[1]: Reached target network.target. Dec 13 15:43:58.590019 systemd[1]: Starting iscsiuio.service... Dec 13 15:43:58.598294 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:43:58.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.609734 ignition[617]: Ignition 2.14.0 Dec 13 15:43:58.601593 systemd[1]: Started iscsiuio.service. Dec 13 15:43:58.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.609757 ignition[617]: Stage: fetch-offline Dec 13 15:43:58.603215 systemd-networkd[705]: eth0: Link UP Dec 13 15:43:58.633251 iscsid[710]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 15:43:58.633251 iscsid[710]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 15:43:58.633251 iscsid[710]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 15:43:58.633251 iscsid[710]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 15:43:58.633251 iscsid[710]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 15:43:58.633251 iscsid[710]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 15:43:58.633251 iscsid[710]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 15:43:58.609877 ignition[617]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:43:58.603222 systemd-networkd[705]: eth0: Gained carrier Dec 13 15:43:58.609916 ignition[617]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:43:58.608642 systemd[1]: Starting iscsid.service... Dec 13 15:43:58.611650 ignition[617]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:43:58.615239 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 15:43:58.611826 ignition[617]: parsed url from cmdline: "" Dec 13 15:43:58.617210 systemd[1]: Starting ignition-fetch.service... Dec 13 15:43:58.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.611833 ignition[617]: no config URL provided Dec 13 15:43:58.619870 systemd[1]: Started iscsid.service. Dec 13 15:43:58.611843 ignition[617]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:43:58.623112 systemd[1]: Starting dracut-initqueue.service... 
Dec 13 15:43:58.611859 ignition[617]: no config at "/usr/lib/ignition/user.ign" Dec 13 15:43:58.648067 systemd[1]: Finished dracut-initqueue.service. Dec 13 15:43:58.611892 ignition[617]: failed to fetch config: resource requires networking Dec 13 15:43:58.648967 systemd[1]: Reached target remote-fs-pre.target. Dec 13 15:43:58.612088 ignition[617]: Ignition finished successfully Dec 13 15:43:58.650084 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 15:43:58.627893 ignition[711]: Ignition 2.14.0 Dec 13 15:43:58.651725 systemd[1]: Reached target remote-fs.target. Dec 13 15:43:58.627904 ignition[711]: Stage: fetch Dec 13 15:43:58.654081 systemd[1]: Starting dracut-pre-mount.service... Dec 13 15:43:58.628117 ignition[711]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:43:58.657663 systemd-networkd[705]: eth0: DHCPv4 address 10.244.25.74/30, gateway 10.244.25.73 acquired from 10.244.25.73 Dec 13 15:43:58.628199 ignition[711]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:43:58.629845 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:43:58.630025 ignition[711]: parsed url from cmdline: "" Dec 13 15:43:58.630032 ignition[711]: no config URL provided Dec 13 15:43:58.630058 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:43:58.630077 ignition[711]: no config at "/usr/lib/ignition/user.ign" Dec 13 15:43:58.636759 ignition[711]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 15:43:58.636825 ignition[711]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 13 15:43:58.637701 ignition[711]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 15:43:58.639724 ignition[711]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 15:43:58.669431 systemd[1]: Finished dracut-pre-mount.service. Dec 13 15:43:58.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.840767 ignition[711]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Dec 13 15:43:58.857320 ignition[711]: GET result: OK Dec 13 15:43:58.857910 ignition[711]: parsing config with SHA512: 1e9644e391ba6d6933c0f97ceca74de01a5aa80caea7ba5cf0ed27bf2ba7f102f9dde985d94350b4b2e2f4ff3f492d779cb28df77fd96329c93c83c6d20d370b Dec 13 15:43:58.862854 unknown[711]: fetched base config from "system" Dec 13 15:43:58.862877 unknown[711]: fetched base config from "system" Dec 13 15:43:58.862887 unknown[711]: fetched user config from "openstack" Dec 13 15:43:58.867736 ignition[711]: fetch: fetch complete Dec 13 15:43:58.867747 ignition[711]: fetch: fetch passed Dec 13 15:43:58.869331 systemd[1]: Finished ignition-fetch.service. Dec 13 15:43:58.867847 ignition[711]: Ignition finished successfully Dec 13 15:43:58.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.871621 systemd[1]: Starting ignition-kargs.service... 
Dec 13 15:43:58.887696 ignition[730]: Ignition 2.14.0 Dec 13 15:43:58.888769 ignition[730]: Stage: kargs Dec 13 15:43:58.889820 ignition[730]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:43:58.890812 ignition[730]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:43:58.892280 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:43:58.894429 ignition[730]: kargs: kargs passed Dec 13 15:43:58.895211 ignition[730]: Ignition finished successfully Dec 13 15:43:58.896990 systemd[1]: Finished ignition-kargs.service. Dec 13 15:43:58.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.898799 systemd[1]: Starting ignition-disks.service... Dec 13 15:43:58.909884 ignition[736]: Ignition 2.14.0 Dec 13 15:43:58.910962 ignition[736]: Stage: disks Dec 13 15:43:58.911799 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:43:58.912760 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:43:58.914140 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:43:58.916363 ignition[736]: disks: disks passed Dec 13 15:43:58.917132 ignition[736]: Ignition finished successfully Dec 13 15:43:58.918832 systemd[1]: Finished ignition-disks.service. Dec 13 15:43:58.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.919759 systemd[1]: Reached target initrd-root-device.target. 
Dec 13 15:43:58.921019 systemd[1]: Reached target local-fs-pre.target. Dec 13 15:43:58.923073 systemd[1]: Reached target local-fs.target. Dec 13 15:43:58.923791 systemd[1]: Reached target sysinit.target. Dec 13 15:43:58.924981 systemd[1]: Reached target basic.target. Dec 13 15:43:58.927366 systemd[1]: Starting systemd-fsck-root.service... Dec 13 15:43:58.946719 systemd-fsck[743]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 15:43:58.952461 systemd[1]: Finished systemd-fsck-root.service. Dec 13 15:43:58.954274 systemd[1]: Mounting sysroot.mount... Dec 13 15:43:58.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:58.968547 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 15:43:58.969328 systemd[1]: Mounted sysroot.mount. Dec 13 15:43:58.970080 systemd[1]: Reached target initrd-root-fs.target. Dec 13 15:43:58.972837 systemd[1]: Mounting sysroot-usr.mount... Dec 13 15:43:58.974049 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 15:43:58.974998 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 15:43:58.978230 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 15:43:58.978275 systemd[1]: Reached target ignition-diskful.target. Dec 13 15:43:58.981400 systemd[1]: Mounted sysroot-usr.mount. Dec 13 15:43:58.983834 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 15:43:58.992427 initrd-setup-root[754]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 15:43:59.003911 initrd-setup-root[762]: cut: /sysroot/etc/group: No such file or directory Dec 13 15:43:59.016061 initrd-setup-root[770]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 15:43:59.024810 initrd-setup-root[778]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 15:43:59.102971 systemd[1]: Finished initrd-setup-root.service. Dec 13 15:43:59.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:59.104896 systemd[1]: Starting ignition-mount.service... Dec 13 15:43:59.106571 systemd[1]: Starting sysroot-boot.service... Dec 13 15:43:59.121824 bash[797]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 15:43:59.132648 coreos-metadata[749]: Dec 13 15:43:59.132 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 15:43:59.141200 ignition[799]: INFO : Ignition 2.14.0 Dec 13 15:43:59.142248 ignition[799]: INFO : Stage: mount Dec 13 15:43:59.143131 ignition[799]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:43:59.144114 ignition[799]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:43:59.148637 systemd[1]: Finished sysroot-boot.service. Dec 13 15:43:59.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:43:59.150690 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:43:59.153184 ignition[799]: INFO : mount: mount passed Dec 13 15:43:59.153999 coreos-metadata[749]: Dec 13 15:43:59.153 INFO Fetch successful Dec 13 15:43:59.153999 coreos-metadata[749]: Dec 13 15:43:59.153 INFO wrote hostname srv-is9pt.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 15:43:59.156062 ignition[799]: INFO : Ignition finished successfully Dec 13 15:43:59.157625 systemd[1]: Finished ignition-mount.service. Dec 13 15:43:59.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:59.172639 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 15:43:59.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:59.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:43:59.172771 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 15:43:59.301566 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 15:43:59.313552 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (806) Dec 13 15:43:59.317644 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 15:43:59.317685 kernel: BTRFS info (device vda6): using free space tree Dec 13 15:43:59.317705 kernel: BTRFS info (device vda6): has skinny extents Dec 13 15:43:59.324753 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 15:43:59.327309 systemd[1]: Starting ignition-files.service... 
Dec 13 15:43:59.347618 ignition[826]: INFO : Ignition 2.14.0 Dec 13 15:43:59.347618 ignition[826]: INFO : Stage: files Dec 13 15:43:59.349298 ignition[826]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:43:59.349298 ignition[826]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:43:59.349298 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:43:59.352379 ignition[826]: DEBUG : files: compiled without relabeling support, skipping Dec 13 15:43:59.352379 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 15:43:59.352379 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 15:43:59.355462 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 15:43:59.355462 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 15:43:59.357446 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 15:43:59.357446 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 15:43:59.357446 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 15:43:59.356311 unknown[826]: wrote ssh authorized keys file for user: core Dec 13 15:43:59.363200 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:43:59.363200 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:43:59.363200 ignition[826]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 15:43:59.363200 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 15:43:59.363200 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 15:43:59.363200 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 15:43:59.972468 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 15:44:00.201950 systemd-networkd[705]: eth0: Gained IPv6LL Dec 13 15:44:01.137429 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 15:44:01.137429 ignition[826]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 15:44:01.137429 ignition[826]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 15:44:01.141085 ignition[826]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 15:44:01.141085 ignition[826]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 15:44:01.149084 ignition[826]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:44:01.150742 ignition[826]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:44:01.150742 ignition[826]: INFO : files: files 
passed Dec 13 15:44:01.150742 ignition[826]: INFO : Ignition finished successfully Dec 13 15:44:01.154380 systemd[1]: Finished ignition-files.service. Dec 13 15:44:01.163433 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 15:44:01.163479 kernel: audit: type=1130 audit(1734104641.154:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.157074 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 15:44:01.165941 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 15:44:01.168195 systemd[1]: Starting ignition-quench.service... Dec 13 15:44:01.173163 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 15:44:01.174171 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:44:01.175401 systemd[1]: Finished ignition-quench.service. Dec 13 15:44:01.181531 kernel: audit: type=1130 audit(1734104641.176:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.181569 kernel: audit: type=1131 audit(1734104641.176:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:44:01.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.177087 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 15:44:01.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.188572 systemd[1]: Reached target ignition-complete.target. Dec 13 15:44:01.194293 kernel: audit: type=1130 audit(1734104641.187:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.196198 systemd[1]: Starting initrd-parse-etc.service... Dec 13 15:44:01.214806 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 15:44:01.215878 systemd[1]: Finished initrd-parse-etc.service. Dec 13 15:44:01.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.217373 systemd[1]: Reached target initrd-fs.target. Dec 13 15:44:01.228089 kernel: audit: type=1130 audit(1734104641.216:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.228138 kernel: audit: type=1131 audit(1734104641.216:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:44:01.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.228363 systemd[1]: Reached target initrd.target. Dec 13 15:44:01.229771 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 15:44:01.231922 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 15:44:01.247708 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 15:44:01.250579 systemd[1]: Starting initrd-cleanup.service... Dec 13 15:44:01.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.257531 kernel: audit: type=1130 audit(1734104641.247:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.262341 systemd[1]: Stopped target nss-lookup.target. Dec 13 15:44:01.263185 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 15:44:01.264514 systemd[1]: Stopped target timers.target. Dec 13 15:44:01.267314 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 15:44:01.267556 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 15:44:01.275199 kernel: audit: type=1131 audit(1734104641.268:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.268748 systemd[1]: Stopped target initrd.target. 
Dec 13 15:44:01.274629 systemd[1]: Stopped target basic.target. Dec 13 15:44:01.275853 systemd[1]: Stopped target ignition-complete.target. Dec 13 15:44:01.277182 systemd[1]: Stopped target ignition-diskful.target. Dec 13 15:44:01.278458 systemd[1]: Stopped target initrd-root-device.target. Dec 13 15:44:01.279869 systemd[1]: Stopped target remote-fs.target. Dec 13 15:44:01.281177 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 15:44:01.282523 systemd[1]: Stopped target sysinit.target. Dec 13 15:44:01.283791 systemd[1]: Stopped target local-fs.target. Dec 13 15:44:01.285128 systemd[1]: Stopped target local-fs-pre.target. Dec 13 15:44:01.286386 systemd[1]: Stopped target swap.target. Dec 13 15:44:01.294350 kernel: audit: type=1131 audit(1734104641.288:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.287592 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 15:44:01.287765 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 15:44:01.315759 kernel: audit: type=1131 audit(1734104641.310:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.289148 systemd[1]: Stopped target cryptsetup.target. 
Dec 13 15:44:01.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.295044 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 15:44:01.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.295246 systemd[1]: Stopped dracut-initqueue.service. Dec 13 15:44:01.310818 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 15:44:01.311058 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 15:44:01.316666 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 15:44:01.316900 systemd[1]: Stopped ignition-files.service. Dec 13 15:44:01.319303 systemd[1]: Stopping ignition-mount.service... Dec 13 15:44:01.327524 iscsid[710]: iscsid shutting down. Dec 13 15:44:01.329752 systemd[1]: Stopping iscsid.service... 
Dec 13 15:44:01.330512 ignition[864]: INFO : Ignition 2.14.0 Dec 13 15:44:01.330512 ignition[864]: INFO : Stage: umount Dec 13 15:44:01.332118 ignition[864]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:44:01.332118 ignition[864]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:44:01.335592 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:44:01.335592 ignition[864]: INFO : umount: umount passed Dec 13 15:44:01.335592 ignition[864]: INFO : Ignition finished successfully Dec 13 15:44:01.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.336474 systemd[1]: Stopping sysroot-boot.service... Dec 13 15:44:01.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.337291 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 15:44:01.337612 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 15:44:01.338811 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 15:44:01.339031 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 15:44:01.343189 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 15:44:01.343383 systemd[1]: Stopped iscsid.service. Dec 13 15:44:01.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.347186 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 13 15:44:01.348148 systemd[1]: Stopped ignition-mount.service. Dec 13 15:44:01.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.351123 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 15:44:01.352053 systemd[1]: Finished initrd-cleanup.service. Dec 13 15:44:01.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.354775 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 15:44:01.355625 systemd[1]: Stopped ignition-disks.service. Dec 13 15:44:01.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.357140 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 15:44:01.357964 systemd[1]: Stopped ignition-kargs.service. Dec 13 15:44:01.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.359443 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 15:44:01.360307 systemd[1]: Stopped ignition-fetch.service. Dec 13 15:44:01.361939 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 15:44:01.362873 systemd[1]: Stopped ignition-fetch-offline.service. 
Dec 13 15:44:01.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.365548 systemd[1]: Stopped target paths.target. Dec 13 15:44:01.366825 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 15:44:01.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.371591 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 15:44:01.373910 systemd[1]: Stopped target slices.target. Dec 13 15:44:01.375214 systemd[1]: Stopped target sockets.target. Dec 13 15:44:01.376560 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 15:44:01.376624 systemd[1]: Closed iscsid.socket. Dec 13 15:44:01.378594 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 15:44:01.378672 systemd[1]: Stopped ignition-setup.service. Dec 13 15:44:01.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.380826 systemd[1]: Stopping iscsiuio.service... Dec 13 15:44:01.383031 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 15:44:01.384834 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 15:44:01.384990 systemd[1]: Stopped iscsiuio.service. Dec 13 15:44:01.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.386311 systemd[1]: Stopped target network.target. Dec 13 15:44:01.387357 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 15:44:01.387410 systemd[1]: Closed iscsiuio.socket. 
Dec 13 15:44:01.389205 systemd[1]: Stopping systemd-networkd.service... Dec 13 15:44:01.390301 systemd[1]: Stopping systemd-resolved.service... Dec 13 15:44:01.393586 systemd-networkd[705]: eth0: DHCPv6 lease lost Dec 13 15:44:01.396445 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 15:44:01.396614 systemd[1]: Stopped systemd-resolved.service. Dec 13 15:44:01.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.398906 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 15:44:01.399047 systemd[1]: Stopped systemd-networkd.service. Dec 13 15:44:01.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.401000 audit: BPF prog-id=9 op=UNLOAD Dec 13 15:44:01.401000 audit: BPF prog-id=6 op=UNLOAD Dec 13 15:44:01.401698 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 15:44:01.401749 systemd[1]: Closed systemd-networkd.socket. Dec 13 15:44:01.404165 systemd[1]: Stopping network-cleanup.service... Dec 13 15:44:01.406188 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 15:44:01.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.406262 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 15:44:01.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:01.407586 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 13 15:44:01.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.407654 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 15:44:01.409356 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 15:44:01.409423 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 15:44:01.422621 systemd[1]: Stopping systemd-udevd.service...
Dec 13 15:44:01.424839 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 15:44:01.428774 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 15:44:01.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.428937 systemd[1]: Stopped network-cleanup.service.
Dec 13 15:44:01.431718 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 15:44:01.431916 systemd[1]: Stopped systemd-udevd.service.
Dec 13 15:44:01.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.433698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 15:44:01.433757 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 15:44:01.434643 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 15:44:01.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.434693 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 15:44:01.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.435970 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 15:44:01.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.436034 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 15:44:01.437240 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 15:44:01.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.437303 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 15:44:01.438748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 15:44:01.438810 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 15:44:01.441082 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 15:44:01.442615 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 15:44:01.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.442712 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 15:44:01.443711 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 15:44:01.443773 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 15:44:01.444426 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 15:44:01.444515 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 15:44:01.446838 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 15:44:01.454066 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 15:44:01.454219 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 15:44:01.489625 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 15:44:01.489799 systemd[1]: Stopped sysroot-boot.service.
Dec 13 15:44:01.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.491427 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 15:44:01.492425 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 15:44:01.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:01.492492 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 15:44:01.494955 systemd[1]: Starting initrd-switch-root.service...
Dec 13 15:44:01.510876 systemd[1]: Switching root.
Dec 13 15:44:01.531939 systemd-journald[201]: Journal stopped
Dec 13 15:44:05.449727 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 13 15:44:05.449838 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 15:44:05.449865 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 15:44:05.449901 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 15:44:05.449936 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 15:44:05.449957 kernel: SELinux: policy capability open_perms=1
Dec 13 15:44:05.449977 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 15:44:05.450004 kernel: SELinux: policy capability always_check_network=0
Dec 13 15:44:05.450048 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 15:44:05.450070 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 15:44:05.450095 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 15:44:05.450116 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 15:44:05.450137 systemd[1]: Successfully loaded SELinux policy in 71.156ms.
Dec 13 15:44:05.450177 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.589ms.
Dec 13 15:44:05.450209 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 15:44:05.450231 systemd[1]: Detected virtualization kvm.
Dec 13 15:44:05.450263 systemd[1]: Detected architecture x86-64.
Dec 13 15:44:05.450286 systemd[1]: Detected first boot.
Dec 13 15:44:05.450307 systemd[1]: Hostname set to .
Dec 13 15:44:05.450343 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 15:44:05.450365 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 15:44:05.450386 systemd[1]: Populated /etc with preset unit settings.
Dec 13 15:44:05.450408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 15:44:05.450430 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 15:44:05.450466 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 15:44:05.450523 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 15:44:05.452570 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 15:44:05.452608 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 15:44:05.452638 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 15:44:05.452661 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 15:44:05.452701 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 15:44:05.452724 systemd[1]: Created slice system-getty.slice.
Dec 13 15:44:05.452745 systemd[1]: Created slice system-modprobe.slice.
Dec 13 15:44:05.452765 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 15:44:05.452785 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 15:44:05.452806 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 15:44:05.452838 systemd[1]: Created slice user.slice.
Dec 13 15:44:05.452861 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 15:44:05.452882 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 15:44:05.452902 systemd[1]: Set up automount boot.automount.
Dec 13 15:44:05.452924 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 15:44:05.452944 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 15:44:05.452976 systemd[1]: Stopped target initrd-fs.target.
Dec 13 15:44:05.452999 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 15:44:05.453020 systemd[1]: Reached target integritysetup.target.
Dec 13 15:44:05.453056 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 15:44:05.453079 systemd[1]: Reached target remote-fs.target.
Dec 13 15:44:05.453108 systemd[1]: Reached target slices.target.
Dec 13 15:44:05.453130 systemd[1]: Reached target swap.target.
Dec 13 15:44:05.453152 systemd[1]: Reached target torcx.target.
Dec 13 15:44:05.453173 systemd[1]: Reached target veritysetup.target.
Dec 13 15:44:05.453193 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 15:44:05.453226 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 15:44:05.453249 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 15:44:05.453277 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 15:44:05.453298 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 15:44:05.453319 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 15:44:05.453339 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 15:44:05.453359 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 15:44:05.453380 systemd[1]: Mounting media.mount...
Dec 13 15:44:05.453400 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:44:05.453432 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 15:44:05.453455 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 15:44:05.453477 systemd[1]: Mounting tmp.mount...
Dec 13 15:44:05.453518 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 15:44:05.453555 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 15:44:05.453580 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 15:44:05.453601 systemd[1]: Starting modprobe@configfs.service...
Dec 13 15:44:05.453622 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 15:44:05.453643 systemd[1]: Starting modprobe@drm.service...
Dec 13 15:44:05.453675 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 15:44:05.453700 systemd[1]: Starting modprobe@fuse.service...
Dec 13 15:44:05.453727 systemd[1]: Starting modprobe@loop.service...
Dec 13 15:44:05.453749 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 15:44:05.453779 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 15:44:05.453807 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 15:44:05.453830 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 15:44:05.453850 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 15:44:05.453871 systemd[1]: Stopped systemd-journald.service.
Dec 13 15:44:05.453903 systemd[1]: Starting systemd-journald.service...
Dec 13 15:44:05.453926 systemd[1]: Starting systemd-modules-load.service...
Dec 13 15:44:05.453946 systemd[1]: Starting systemd-network-generator.service...
Dec 13 15:44:05.453967 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 15:44:05.453987 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 15:44:05.454007 kernel: loop: module loaded
Dec 13 15:44:05.454056 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 15:44:05.454080 systemd[1]: Stopped verity-setup.service.
Dec 13 15:44:05.454108 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:44:05.454143 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 15:44:05.454166 kernel: fuse: init (API version 7.34)
Dec 13 15:44:05.454192 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 15:44:05.454214 systemd[1]: Mounted media.mount.
Dec 13 15:44:05.454234 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 15:44:05.454254 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 15:44:05.454274 systemd[1]: Mounted tmp.mount.
Dec 13 15:44:05.454294 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 15:44:05.454326 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 15:44:05.454348 systemd[1]: Finished modprobe@configfs.service.
Dec 13 15:44:05.454384 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 15:44:05.454410 systemd-journald[973]: Journal started
Dec 13 15:44:05.454483 systemd-journald[973]: Runtime Journal (/run/log/journal/f0def2d0de6840ec99f17cfb192b0836) is 4.7M, max 38.1M, 33.3M free.
Dec 13 15:44:01.712000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 15:44:05.456615 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 15:44:01.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 15:44:01.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 15:44:01.787000 audit: BPF prog-id=10 op=LOAD
Dec 13 15:44:01.787000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 15:44:01.787000 audit: BPF prog-id=11 op=LOAD
Dec 13 15:44:01.787000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 15:44:01.881000 audit[896]: AVC avc: denied { associate } for pid=896 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 15:44:01.881000 audit[896]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=879 pid=896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:44:01.881000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 15:44:01.884000 audit[896]: AVC avc: denied { associate } for pid=896 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 15:44:01.884000 audit[896]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=879 pid=896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:44:01.884000 audit: CWD cwd="/"
Dec 13 15:44:01.884000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:44:01.884000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:44:01.884000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 15:44:05.184000 audit: BPF prog-id=12 op=LOAD
Dec 13 15:44:05.184000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 15:44:05.184000 audit: BPF prog-id=13 op=LOAD
Dec 13 15:44:05.185000 audit: BPF prog-id=14 op=LOAD
Dec 13 15:44:05.185000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 15:44:05.185000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 15:44:05.185000 audit: BPF prog-id=15 op=LOAD
Dec 13 15:44:05.185000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 15:44:05.186000 audit: BPF prog-id=16 op=LOAD
Dec 13 15:44:05.186000 audit: BPF prog-id=17 op=LOAD
Dec 13 15:44:05.186000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 15:44:05.186000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 15:44:05.187000 audit: BPF prog-id=18 op=LOAD
Dec 13 15:44:05.187000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 15:44:05.187000 audit: BPF prog-id=19 op=LOAD
Dec 13 15:44:05.187000 audit: BPF prog-id=20 op=LOAD
Dec 13 15:44:05.187000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 15:44:05.187000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 15:44:05.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.196000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 15:44:05.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.369000 audit: BPF prog-id=21 op=LOAD
Dec 13 15:44:05.369000 audit: BPF prog-id=22 op=LOAD
Dec 13 15:44:05.369000 audit: BPF prog-id=23 op=LOAD
Dec 13 15:44:05.369000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 15:44:05.370000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 15:44:05.462023 systemd[1]: Started systemd-journald.service.
Dec 13 15:44:05.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.446000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 15:44:05.446000 audit[973]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc025bb6a0 a2=4000 a3=7ffc025bb73c items=0 ppid=1 pid=973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:44:05.446000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 15:44:05.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.180726 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 15:44:01.878477 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 15:44:05.180745 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 15:44:01.879716 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 15:44:05.188612 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 15:44:01.879764 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 15:44:05.461767 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 15:44:01.879819 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 15:44:05.461988 systemd[1]: Finished modprobe@drm.service.
Dec 13 15:44:01.879837 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 15:44:01.879888 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 15:44:01.879910 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 15:44:01.880263 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 15:44:01.880336 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 15:44:01.880364 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 15:44:01.881220 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 15:44:01.881277 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 15:44:01.881309 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 15:44:01.881336 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 15:44:01.881367 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 15:44:01.881392 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 15:44:04.602639 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:44:04.603382 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:44:04.603711 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:44:04.604844 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:44:04.604958 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 15:44:04.605142 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2024-12-13T15:44:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 15:44:05.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.478236 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 15:44:05.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.479320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 15:44:05.479598 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 15:44:05.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.480675 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 15:44:05.480870 systemd[1]: Finished modprobe@fuse.service.
Dec 13 15:44:05.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.481919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 15:44:05.482139 systemd[1]: Finished modprobe@loop.service.
Dec 13 15:44:05.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.483379 systemd[1]: Finished systemd-modules-load.service.
Dec 13 15:44:05.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.484394 systemd[1]: Finished systemd-network-generator.service.
Dec 13 15:44:05.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.485516 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 15:44:05.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:05.487233 systemd[1]: Reached target network-pre.target.
Dec 13 15:44:05.489890 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 15:44:05.493186 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 15:44:05.497684 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 15:44:05.500998 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 15:44:05.508802 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 15:44:05.509681 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 15:44:05.511630 systemd[1]: Starting systemd-random-seed.service...
Dec 13 15:44:05.512444 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 15:44:05.516332 systemd[1]: Starting systemd-sysctl.service...
Dec 13 15:44:05.519478 systemd[1]: Starting systemd-sysusers.service...
Dec 13 15:44:05.525395 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 15:44:05.529541 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 15:44:05.534748 systemd-journald[973]: Time spent on flushing to /var/log/journal/f0def2d0de6840ec99f17cfb192b0836 is 81.081ms for 1284 entries.
Dec 13 15:44:05.534748 systemd-journald[973]: System Journal (/var/log/journal/f0def2d0de6840ec99f17cfb192b0836) is 8.0M, max 584.8M, 576.8M free.
Dec 13 15:44:05.639009 systemd-journald[973]: Received client request to flush runtime journal. Dec 13 15:44:05.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:05.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:05.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:05.547899 systemd[1]: Finished systemd-random-seed.service. Dec 13 15:44:05.548830 systemd[1]: Reached target first-boot-complete.target. Dec 13 15:44:05.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:05.562683 systemd[1]: Finished systemd-sysctl.service. Dec 13 15:44:05.567090 systemd[1]: Finished systemd-sysusers.service. Dec 13 15:44:05.570770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 15:44:05.640596 systemd[1]: Finished systemd-journal-flush.service. Dec 13 15:44:05.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:05.651782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 15:44:05.708624 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 15:44:05.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:05.711253 systemd[1]: Starting systemd-udev-settle.service... Dec 13 15:44:05.723156 udevadm[1008]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 15:44:06.248365 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 15:44:06.257433 kernel: kauditd_printk_skb: 108 callbacks suppressed Dec 13 15:44:06.257622 kernel: audit: type=1130 audit(1734104646.250:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.256000 audit: BPF prog-id=24 op=LOAD Dec 13 15:44:06.258830 systemd[1]: Starting systemd-udevd.service... Dec 13 15:44:06.257000 audit: BPF prog-id=25 op=LOAD Dec 13 15:44:06.257000 audit: BPF prog-id=7 op=UNLOAD Dec 13 15:44:06.257000 audit: BPF prog-id=8 op=UNLOAD Dec 13 15:44:06.262010 kernel: audit: type=1334 audit(1734104646.256:149): prog-id=24 op=LOAD Dec 13 15:44:06.262146 kernel: audit: type=1334 audit(1734104646.257:150): prog-id=25 op=LOAD Dec 13 15:44:06.262196 kernel: audit: type=1334 audit(1734104646.257:151): prog-id=7 op=UNLOAD Dec 13 15:44:06.262237 kernel: audit: type=1334 audit(1734104646.257:152): prog-id=8 op=UNLOAD Dec 13 15:44:06.290385 systemd-udevd[1009]: Using default interface naming scheme 'v252'. Dec 13 15:44:06.322827 systemd[1]: Started systemd-udevd.service. 
Dec 13 15:44:06.333829 kernel: audit: type=1130 audit(1734104646.323:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.324000 audit: BPF prog-id=26 op=LOAD Dec 13 15:44:06.338596 kernel: audit: type=1334 audit(1734104646.324:154): prog-id=26 op=LOAD Dec 13 15:44:06.335523 systemd[1]: Starting systemd-networkd.service... Dec 13 15:44:06.353436 kernel: audit: type=1334 audit(1734104646.345:155): prog-id=27 op=LOAD Dec 13 15:44:06.353587 kernel: audit: type=1334 audit(1734104646.345:156): prog-id=28 op=LOAD Dec 13 15:44:06.353640 kernel: audit: type=1334 audit(1734104646.345:157): prog-id=29 op=LOAD Dec 13 15:44:06.345000 audit: BPF prog-id=27 op=LOAD Dec 13 15:44:06.345000 audit: BPF prog-id=28 op=LOAD Dec 13 15:44:06.345000 audit: BPF prog-id=29 op=LOAD Dec 13 15:44:06.347171 systemd[1]: Starting systemd-userdbd.service... Dec 13 15:44:06.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.395243 systemd[1]: Started systemd-userdbd.service. Dec 13 15:44:06.418918 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Dec 13 15:44:06.516891 systemd-networkd[1022]: lo: Link UP Dec 13 15:44:06.516906 systemd-networkd[1022]: lo: Gained carrier Dec 13 15:44:06.517806 systemd-networkd[1022]: Enumeration completed Dec 13 15:44:06.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.517958 systemd[1]: Started systemd-networkd.service. Dec 13 15:44:06.517983 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:44:06.520839 systemd-networkd[1022]: eth0: Link UP Dec 13 15:44:06.520854 systemd-networkd[1022]: eth0: Gained carrier Dec 13 15:44:06.534549 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 15:44:06.538717 systemd-networkd[1022]: eth0: DHCPv4 address 10.244.25.74/30, gateway 10.244.25.73 acquired from 10.244.25.73 Dec 13 15:44:06.544529 kernel: ACPI: button: Power Button [PWRF] Dec 13 15:44:06.563527 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 15:44:06.584184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 15:44:06.609000 audit[1015]: AVC avc: denied { confidentiality } for pid=1015 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 15:44:06.609000 audit[1015]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5606a77d1690 a1=337fc a2=7fd859049bc5 a3=5 items=110 ppid=1009 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 15:44:06.609000 audit: CWD cwd="/" Dec 13 15:44:06.609000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=1 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=2 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=3 name=(null) inode=14289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=4 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=5 name=(null) inode=14290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=6 name=(null) 
inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=7 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=8 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=9 name=(null) inode=14292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=10 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=11 name=(null) inode=14293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=12 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=13 name=(null) inode=14294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=14 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=15 name=(null) inode=14295 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=16 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=17 name=(null) inode=14296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=18 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=19 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=20 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=21 name=(null) inode=14298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=22 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=23 name=(null) inode=14299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=24 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=25 name=(null) inode=14300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=26 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=27 name=(null) inode=14301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=28 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=29 name=(null) inode=14302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=30 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=31 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=32 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=33 name=(null) inode=14304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=34 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=35 name=(null) inode=14305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=36 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=37 name=(null) inode=14306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=38 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=39 name=(null) inode=14307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=40 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=41 name=(null) inode=14308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=42 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=43 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=44 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=45 name=(null) inode=14310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=46 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=47 name=(null) inode=14311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=48 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=49 name=(null) inode=14312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=50 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=51 name=(null) inode=14313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
15:44:06.609000 audit: PATH item=52 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=53 name=(null) inode=14314 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=55 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=56 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=57 name=(null) inode=14316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=58 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=59 name=(null) inode=14317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=60 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=61 
name=(null) inode=14318 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=62 name=(null) inode=14318 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=63 name=(null) inode=14319 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=64 name=(null) inode=14318 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=65 name=(null) inode=14320 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=66 name=(null) inode=14318 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=67 name=(null) inode=14321 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=68 name=(null) inode=14318 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=69 name=(null) inode=14322 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=70 name=(null) inode=14318 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=71 name=(null) inode=14323 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=72 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=73 name=(null) inode=14324 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=74 name=(null) inode=14324 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=75 name=(null) inode=14325 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=76 name=(null) inode=14324 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=77 name=(null) inode=14326 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=78 name=(null) inode=14324 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=79 name=(null) inode=14327 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=80 name=(null) inode=14324 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=81 name=(null) inode=14328 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=82 name=(null) inode=14324 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=83 name=(null) inode=14329 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=84 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=85 name=(null) inode=14330 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=86 name=(null) inode=14330 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=87 name=(null) inode=14331 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=88 name=(null) inode=14330 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=89 name=(null) inode=14332 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=90 name=(null) inode=14330 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=91 name=(null) inode=14333 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=92 name=(null) inode=14330 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=93 name=(null) inode=14334 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.665583 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 15:44:06.609000 audit: PATH item=94 name=(null) inode=14330 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=95 name=(null) inode=14335 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=96 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=97 name=(null) inode=14336 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=98 name=(null) inode=14336 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=99 name=(null) inode=16385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=100 name=(null) inode=14336 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=101 name=(null) inode=16386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=102 name=(null) inode=14336 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=103 name=(null) inode=16387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=104 name=(null) inode=14336 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=105 name=(null) inode=16388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=106 name=(null) inode=14336 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=107 name=(null) inode=16389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PATH item=109 name=(null) inode=15538 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:44:06.609000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 15:44:06.696608 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 15:44:06.727933 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 15:44:06.728255 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 15:44:06.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.842694 systemd[1]: Finished systemd-udev-settle.service. Dec 13 15:44:06.845821 systemd[1]: Starting lvm2-activation-early.service... Dec 13 15:44:06.878800 lvm[1038]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 15:44:06.911451 systemd[1]: Finished lvm2-activation-early.service. Dec 13 15:44:06.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:44:06.912471 systemd[1]: Reached target cryptsetup.target. 
Dec 13 15:44:06.915043 systemd[1]: Starting lvm2-activation.service...
Dec 13 15:44:06.922287 lvm[1039]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 15:44:06.950295 systemd[1]: Finished lvm2-activation.service.
Dec 13 15:44:06.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:06.951209 systemd[1]: Reached target local-fs-pre.target.
Dec 13 15:44:06.951897 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 15:44:06.951944 systemd[1]: Reached target local-fs.target.
Dec 13 15:44:06.952597 systemd[1]: Reached target machines.target.
Dec 13 15:44:06.955280 systemd[1]: Starting ldconfig.service...
Dec 13 15:44:06.956628 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 15:44:06.956709 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 15:44:06.958604 systemd[1]: Starting systemd-boot-update.service...
Dec 13 15:44:06.960776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 15:44:06.968656 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 15:44:06.971235 systemd[1]: Starting systemd-sysext.service...
Dec 13 15:44:06.972627 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1041 (bootctl)
Dec 13 15:44:06.975729 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 15:44:06.995808 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 15:44:07.048074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 15:44:07.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.082431 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 15:44:07.083018 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 15:44:07.131744 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 15:44:07.141466 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 15:44:07.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.143365 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 15:44:07.172531 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 15:44:07.194550 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 15:44:07.221925 (sd-sysext)[1053]: Using extensions 'kubernetes'.
Dec 13 15:44:07.224234 (sd-sysext)[1053]: Merged extensions into '/usr'.
Dec 13 15:44:07.237541 systemd-fsck[1050]: fsck.fat 4.2 (2021-01-31)
Dec 13 15:44:07.237541 systemd-fsck[1050]: /dev/vda1: 789 files, 119291/258078 clusters
Dec 13 15:44:07.238104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 15:44:07.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.241839 systemd[1]: Mounting boot.mount...
Dec 13 15:44:07.265036 systemd[1]: Mounted boot.mount.
Dec 13 15:44:07.268898 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:44:07.271296 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 15:44:07.273090 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.275266 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 15:44:07.281924 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 15:44:07.287145 systemd[1]: Starting modprobe@loop.service...
Dec 13 15:44:07.287972 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.288261 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 15:44:07.288540 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:44:07.298311 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 15:44:07.300912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 15:44:07.301877 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 15:44:07.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.303460 systemd[1]: Finished systemd-boot-update.service.
Dec 13 15:44:07.304654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 15:44:07.304825 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 15:44:07.306031 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 15:44:07.306202 systemd[1]: Finished modprobe@loop.service.
Dec 13 15:44:07.307467 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 15:44:07.307653 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.310315 systemd[1]: Finished systemd-sysext.service.
Dec 13 15:44:07.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.314097 systemd[1]: Starting ensure-sysext.service...
Dec 13 15:44:07.316334 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 15:44:07.325210 systemd[1]: Reloading.
Dec 13 15:44:07.353592 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 15:44:07.361364 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 15:44:07.370646 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 15:44:07.449881 /usr/lib/systemd/system-generators/torcx-generator[1080]: time="2024-12-13T15:44:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 15:44:07.449936 /usr/lib/systemd/system-generators/torcx-generator[1080]: time="2024-12-13T15:44:07Z" level=info msg="torcx already run"
Dec 13 15:44:07.603286 ldconfig[1040]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 15:44:07.621120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 15:44:07.621154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 15:44:07.649202 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 15:44:07.732000 audit: BPF prog-id=30 op=LOAD
Dec 13 15:44:07.732000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 15:44:07.736000 audit: BPF prog-id=31 op=LOAD
Dec 13 15:44:07.736000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 15:44:07.736000 audit: BPF prog-id=32 op=LOAD
Dec 13 15:44:07.736000 audit: BPF prog-id=33 op=LOAD
Dec 13 15:44:07.736000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 15:44:07.736000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 15:44:07.740000 audit: BPF prog-id=34 op=LOAD
Dec 13 15:44:07.740000 audit: BPF prog-id=35 op=LOAD
Dec 13 15:44:07.740000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 15:44:07.740000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 15:44:07.741000 audit: BPF prog-id=36 op=LOAD
Dec 13 15:44:07.741000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 15:44:07.742000 audit: BPF prog-id=37 op=LOAD
Dec 13 15:44:07.742000 audit: BPF prog-id=38 op=LOAD
Dec 13 15:44:07.742000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 15:44:07.742000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 15:44:07.746305 systemd[1]: Finished ldconfig.service.
Dec 13 15:44:07.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.748859 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 15:44:07.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.754411 systemd[1]: Starting audit-rules.service...
Dec 13 15:44:07.756894 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 15:44:07.760026 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 15:44:07.763000 audit: BPF prog-id=39 op=LOAD
Dec 13 15:44:07.767740 systemd[1]: Starting systemd-resolved.service...
Dec 13 15:44:07.769000 audit: BPF prog-id=40 op=LOAD
Dec 13 15:44:07.771743 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 15:44:07.774182 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 15:44:07.784041 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 15:44:07.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.787619 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.789600 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 15:44:07.794021 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 15:44:07.796767 systemd[1]: Starting modprobe@loop.service...
Dec 13 15:44:07.799570 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.799763 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 15:44:07.799943 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 15:44:07.801567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 15:44:07.801797 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 15:44:07.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.803094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 15:44:07.803282 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 15:44:07.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.806451 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 15:44:07.806672 systemd[1]: Finished modprobe@loop.service.
Dec 13 15:44:07.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.811715 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.815198 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 15:44:07.818857 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 15:44:07.822814 systemd[1]: Starting modprobe@loop.service...
Dec 13 15:44:07.824374 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.824670 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 15:44:07.824927 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 15:44:07.827085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 15:44:07.827349 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 15:44:07.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.830000 audit[1134]: SYSTEM_BOOT pid=1134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.837849 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.841230 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 15:44:07.845320 systemd[1]: Starting modprobe@drm.service...
Dec 13 15:44:07.846208 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.846582 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 15:44:07.849427 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 15:44:07.851858 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 15:44:07.854374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 15:44:07.855813 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 15:44:07.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.858866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 15:44:07.859157 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 15:44:07.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.860947 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 15:44:07.865150 systemd[1]: Finished ensure-sysext.service.
Dec 13 15:44:07.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.866274 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 15:44:07.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.874635 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 15:44:07.874856 systemd[1]: Finished modprobe@drm.service.
Dec 13 15:44:07.876119 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 15:44:07.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.879089 systemd[1]: Starting systemd-update-done.service...
Dec 13 15:44:07.882122 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 15:44:07.882333 systemd[1]: Finished modprobe@loop.service.
Dec 13 15:44:07.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.883261 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.890274 systemd[1]: Finished systemd-update-done.service.
Dec 13 15:44:07.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:44:07.893000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 15:44:07.893000 audit[1158]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfcd8ea10 a2=420 a3=0 items=0 ppid=1128 pid=1158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:44:07.893000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 15:44:07.895126 augenrules[1158]: No rules
Dec 13 15:44:07.896335 systemd[1]: Finished audit-rules.service.
Dec 13 15:44:07.928161 systemd[1]: Started systemd-timesyncd.service.
Dec 13 15:44:07.929103 systemd[1]: Reached target time-set.target.
Dec 13 15:44:07.966588 systemd-resolved[1131]: Positive Trust Anchors:
Dec 13 15:44:07.966613 systemd-resolved[1131]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 15:44:07.966653 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 15:44:07.974779 systemd-resolved[1131]: Using system hostname 'srv-is9pt.gb1.brightbox.com'.
Dec 13 15:44:07.977405 systemd[1]: Started systemd-resolved.service.
Dec 13 15:44:07.978232 systemd[1]: Reached target network.target.
Dec 13 15:44:07.978861 systemd[1]: Reached target nss-lookup.target.
Dec 13 15:44:07.979528 systemd[1]: Reached target sysinit.target.
Dec 13 15:44:07.980226 systemd[1]: Started motdgen.path.
Dec 13 15:44:07.980869 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 15:44:07.981811 systemd[1]: Started logrotate.timer.
Dec 13 15:44:07.982545 systemd[1]: Started mdadm.timer.
Dec 13 15:44:07.983117 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 15:44:07.983776 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 15:44:07.983825 systemd[1]: Reached target paths.target.
Dec 13 15:44:07.984411 systemd[1]: Reached target timers.target.
Dec 13 15:44:07.985475 systemd[1]: Listening on dbus.socket.
Dec 13 15:44:07.987742 systemd[1]: Starting docker.socket...
Dec 13 15:44:07.992158 systemd[1]: Listening on sshd.socket.
Dec 13 15:44:07.992923 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 15:44:07.993593 systemd[1]: Listening on docker.socket.
Dec 13 15:44:07.994328 systemd[1]: Reached target sockets.target.
Dec 13 15:44:07.994958 systemd[1]: Reached target basic.target.
Dec 13 15:44:07.995656 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.995710 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 15:44:07.997262 systemd[1]: Starting containerd.service...
Dec 13 15:44:07.999548 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 15:44:08.001952 systemd[1]: Starting dbus.service...
Dec 13 15:44:08.006553 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 15:44:08.015025 jq[1171]: false
Dec 13 15:44:08.014829 systemd[1]: Starting extend-filesystems.service...
Dec 13 15:44:08.016747 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 15:44:08.021074 systemd[1]: Starting motdgen.service...
Dec 13 15:44:08.026419 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 15:44:08.031432 systemd[1]: Starting sshd-keygen.service...
Dec 13 15:44:08.037730 systemd[1]: Starting systemd-logind.service...
Dec 13 15:44:08.038580 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 15:44:08.038748 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 15:44:08.039607 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 15:44:08.042750 systemd[1]: Starting update-engine.service...
Dec 13 15:44:08.045969 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 15:44:08.054181 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 15:44:08.054547 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 15:44:08.055317 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 15:44:08.055773 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 15:44:08.075885 jq[1184]: true
Dec 13 15:44:08.074142 systemd-networkd[1022]: eth0: Gained IPv6LL
Dec 13 15:44:08.076576 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 15:44:08.077530 systemd[1]: Reached target network-online.target.
Dec 13 15:44:08.080489 systemd[1]: Starting kubelet.service...
Dec 13 15:44:08.084911 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:44:08.085068 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:44:08.106606 extend-filesystems[1172]: Found loop1
Dec 13 15:44:08.111777 extend-filesystems[1172]: Found vda
Dec 13 15:44:08.113814 extend-filesystems[1172]: Found vda1
Dec 13 15:44:08.115367 jq[1191]: true
Dec 13 15:44:08.117905 extend-filesystems[1172]: Found vda2
Dec 13 15:44:08.117905 extend-filesystems[1172]: Found vda3
Dec 13 15:44:08.117905 extend-filesystems[1172]: Found usr
Dec 13 15:44:08.117905 extend-filesystems[1172]: Found vda4
Dec 13 15:44:08.126868 extend-filesystems[1172]: Found vda6
Dec 13 15:44:08.132662 extend-filesystems[1172]: Found vda7
Dec 13 15:44:08.132662 extend-filesystems[1172]: Found vda9
Dec 13 15:44:08.132662 extend-filesystems[1172]: Checking size of /dev/vda9
Dec 13 15:44:08.140622 dbus-daemon[1168]: [system] SELinux support is enabled
Dec 13 15:44:08.140853 systemd[1]: Started dbus.service.
Dec 13 15:44:08.142157 dbus-daemon[1168]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1022 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 15:44:08.146696 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 15:44:08.146743 systemd[1]: Reached target system-config.target.
Dec 13 15:44:08.147414 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 15:44:08.147447 systemd[1]: Reached target user-config.target.
Dec 13 15:44:08.170337 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 15:44:08.170601 systemd[1]: Finished motdgen.service.
Dec 13 15:44:08.173082 dbus-daemon[1168]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 15:44:08.178788 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 15:44:09.324026 systemd-resolved[1131]: Clock change detected. Flushing caches.
Dec 13 15:44:09.325118 systemd-timesyncd[1133]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org).
Dec 13 15:44:09.325207 systemd-timesyncd[1133]: Initial clock synchronization to Fri 2024-12-13 15:44:09.323946 UTC.
Dec 13 15:44:09.333904 update_engine[1182]: I1213 15:44:09.332973 1182 main.cc:92] Flatcar Update Engine starting
Dec 13 15:44:09.338995 systemd[1]: Started update-engine.service.
Dec 13 15:44:09.339238 update_engine[1182]: I1213 15:44:09.339201 1182 update_check_scheduler.cc:74] Next update check in 6m41s
Dec 13 15:44:09.341721 extend-filesystems[1172]: Resized partition /dev/vda9
Dec 13 15:44:09.342574 systemd[1]: Started locksmithd.service.
Dec 13 15:44:09.363285 extend-filesystems[1218]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 15:44:09.371392 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 15:44:09.427569 systemd-logind[1178]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 15:44:09.427617 systemd-logind[1178]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 15:44:09.427979 systemd-logind[1178]: New seat seat0.
Dec 13 15:44:09.431730 systemd[1]: Started systemd-logind.service.
Dec 13 15:44:09.444991 bash[1222]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 15:44:09.445508 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 15:44:09.470817 env[1186]: time="2024-12-13T15:44:09.470689371Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 15:44:09.531144 dbus-daemon[1168]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 15:44:09.531326 systemd[1]: Started systemd-hostnamed.service.
Dec 13 15:44:09.533403 dbus-daemon[1168]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1211 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 15:44:09.538534 systemd[1]: Starting polkit.service...
Dec 13 15:44:09.552626 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 15:44:09.570809 extend-filesystems[1218]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 15:44:09.570809 extend-filesystems[1218]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 15:44:09.570809 extend-filesystems[1218]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 15:44:09.574318 extend-filesystems[1172]: Resized filesystem in /dev/vda9
Dec 13 15:44:09.571270 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 15:44:09.571536 systemd[1]: Finished extend-filesystems.service.
Dec 13 15:44:09.576824 polkitd[1226]: Started polkitd version 121
Dec 13 15:44:09.596102 env[1186]: time="2024-12-13T15:44:09.595804241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 15:44:09.598252 env[1186]: time="2024-12-13T15:44:09.598195025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:44:09.603751 polkitd[1226]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 15:44:09.604024 polkitd[1226]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 15:44:09.604914 env[1186]: time="2024-12-13T15:44:09.604833473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 15:44:09.605151 env[1186]: time="2024-12-13T15:44:09.605066818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:44:09.605651 env[1186]: time="2024-12-13T15:44:09.605615146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 15:44:09.606143 env[1186]: time="2024-12-13T15:44:09.606111911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 15:44:09.606466 env[1186]: time="2024-12-13T15:44:09.606434109Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 15:44:09.607459 env[1186]: time="2024-12-13T15:44:09.607428072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 15:44:09.607776 env[1186]: time="2024-12-13T15:44:09.607747017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:44:09.608492 polkitd[1226]: Finished loading, compiling and executing 2 rules
Dec 13 15:44:09.609226 env[1186]: time="2024-12-13T15:44:09.609185751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:44:09.610503 dbus-daemon[1168]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 15:44:09.610808 systemd[1]: Started polkit.service.
Dec 13 15:44:09.611052 polkitd[1226]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 15:44:09.612022 env[1186]: time="2024-12-13T15:44:09.611985056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 15:44:09.612816 env[1186]: time="2024-12-13T15:44:09.612784672Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 15:44:09.613596 env[1186]: time="2024-12-13T15:44:09.613508114Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 15:44:09.614663 env[1186]: time="2024-12-13T15:44:09.614632210Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645020078Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645150343Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645180728Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645271521Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645308477Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645334625Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645374436Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645401642Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645430739Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645458042Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645481163Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645510236Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645778804Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 15:44:09.646666 env[1186]: time="2024-12-13T15:44:09.645991039Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 15:44:09.647737 env[1186]: time="2024-12-13T15:44:09.646382757Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 15:44:09.647737 env[1186]: time="2024-12-13T15:44:09.646561612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.647737 env[1186]: time="2024-12-13T15:44:09.646592581Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648110553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648169666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648223165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648257497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648281895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648439477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648492252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648517577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.648986 env[1186]: time="2024-12-13T15:44:09.648561516Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 15:44:09.649579 env[1186]: time="2024-12-13T15:44:09.649184076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.649579 env[1186]: time="2024-12-13T15:44:09.649219105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.649926 env[1186]: time="2024-12-13T15:44:09.649688313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.649926 env[1186]: time="2024-12-13T15:44:09.649727417Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 15:44:09.649926 env[1186]: time="2024-12-13T15:44:09.649756742Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 15:44:09.649926 env[1186]: time="2024-12-13T15:44:09.649796676Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 15:44:09.649926 env[1186]: time="2024-12-13T15:44:09.649873547Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 15:44:09.650487 env[1186]: time="2024-12-13T15:44:09.650029933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 15:44:09.650827 env[1186]: time="2024-12-13T15:44:09.650739218Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.651082276Z" level=info msg="Connect containerd service"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.651174936Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.652427744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.653022586Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.653120866Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.653237235Z" level=info msg="containerd successfully booted in 0.188388s"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.656037404Z" level=info msg="Start subscribing containerd event"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.656139684Z" level=info msg="Start recovering state"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.656294679Z" level=info msg="Start event monitor"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.656342307Z" level=info msg="Start snapshots syncer"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.656391902Z" level=info msg="Start cni network conf syncer for default"
Dec 13 15:44:09.656957 env[1186]: time="2024-12-13T15:44:09.656415897Z" level=info msg="Start streaming server"
Dec 13 15:44:09.653431 systemd[1]: Started containerd.service.
Dec 13 15:44:09.672682 systemd-hostnamed[1211]: Hostname set to (static)
Dec 13 15:44:09.833684 locksmithd[1214]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 15:44:10.235742 systemd-networkd[1022]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:652:24:19ff:fef4:194a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:652:24:19ff:fef4:194a/64 assigned by NDisc.
Dec 13 15:44:10.235756 systemd-networkd[1022]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 15:44:10.485053 systemd[1]: Created slice system-sshd.slice.
Dec 13 15:44:10.563390 systemd[1]: Started kubelet.service.
Dec 13 15:44:10.576119 sshd_keygen[1187]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 15:44:10.608879 systemd[1]: Finished sshd-keygen.service.
Dec 13 15:44:10.612064 systemd[1]: Starting issuegen.service...
Dec 13 15:44:10.615811 systemd[1]: Started sshd@0-10.244.25.74:22-139.178.68.195:56604.service.
Dec 13 15:44:10.632632 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 15:44:10.632893 systemd[1]: Finished issuegen.service.
Dec 13 15:44:10.636371 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 15:44:10.647320 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 15:44:10.650482 systemd[1]: Started getty@tty1.service.
Dec 13 15:44:10.655473 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 15:44:10.656576 systemd[1]: Reached target getty.target.
Dec 13 15:44:11.301719 kubelet[1244]: E1213 15:44:11.301628 1244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:44:11.304056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:44:11.304362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:44:11.304851 systemd[1]: kubelet.service: Consumed 1.115s CPU time.
Dec 13 15:44:11.526477 sshd[1253]: Accepted publickey for core from 139.178.68.195 port 56604 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:44:11.529751 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:11.546296 systemd[1]: Created slice user-500.slice.
Dec 13 15:44:11.549714 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 15:44:11.556475 systemd-logind[1178]: New session 1 of user core.
Dec 13 15:44:11.567569 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 15:44:11.572132 systemd[1]: Starting user@500.service...
Dec 13 15:44:11.580850 (systemd)[1266]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:11.686594 systemd[1266]: Queued start job for default target default.target.
Dec 13 15:44:11.687448 systemd[1266]: Reached target paths.target.
Dec 13 15:44:11.687485 systemd[1266]: Reached target sockets.target.
Dec 13 15:44:11.687508 systemd[1266]: Reached target timers.target.
Dec 13 15:44:11.687528 systemd[1266]: Reached target basic.target.
Dec 13 15:44:11.687671 systemd[1]: Started user@500.service.
Dec 13 15:44:11.694014 systemd[1]: Started session-1.scope.
Dec 13 15:44:11.695385 systemd[1266]: Reached target default.target.
Dec 13 15:44:11.695467 systemd[1266]: Startup finished in 104ms.
Dec 13 15:44:12.323301 systemd[1]: Started sshd@1-10.244.25.74:22-139.178.68.195:56606.service.
Dec 13 15:44:13.215903 sshd[1276]: Accepted publickey for core from 139.178.68.195 port 56606 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:44:13.219239 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:13.232908 systemd[1]: Started session-2.scope.
Dec 13 15:44:13.235451 systemd-logind[1178]: New session 2 of user core.
Dec 13 15:44:13.839409 sshd[1276]: pam_unix(sshd:session): session closed for user core
Dec 13 15:44:13.844552 systemd[1]: sshd@1-10.244.25.74:22-139.178.68.195:56606.service: Deactivated successfully.
Dec 13 15:44:13.846083 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 15:44:13.847122 systemd-logind[1178]: Session 2 logged out. Waiting for processes to exit.
Dec 13 15:44:13.848561 systemd-logind[1178]: Removed session 2.
Dec 13 15:44:13.987748 systemd[1]: Started sshd@2-10.244.25.74:22-139.178.68.195:56608.service.
Dec 13 15:44:14.879630 sshd[1282]: Accepted publickey for core from 139.178.68.195 port 56608 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:44:14.881624 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:14.889080 systemd[1]: Started session-3.scope.
Dec 13 15:44:14.889272 systemd-logind[1178]: New session 3 of user core.
Dec 13 15:44:15.500064 sshd[1282]: pam_unix(sshd:session): session closed for user core
Dec 13 15:44:15.503937 systemd-logind[1178]: Session 3 logged out. Waiting for processes to exit.
Dec 13 15:44:15.504839 systemd[1]: sshd@2-10.244.25.74:22-139.178.68.195:56608.service: Deactivated successfully.
Dec 13 15:44:15.505787 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 15:44:15.506975 systemd-logind[1178]: Removed session 3.
Dec 13 15:44:16.289940 coreos-metadata[1167]: Dec 13 15:44:16.289 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 15:44:16.343776 coreos-metadata[1167]: Dec 13 15:44:16.343 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 15:44:16.372151 coreos-metadata[1167]: Dec 13 15:44:16.371 INFO Fetch successful
Dec 13 15:44:16.372576 coreos-metadata[1167]: Dec 13 15:44:16.372 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 15:44:16.411642 coreos-metadata[1167]: Dec 13 15:44:16.411 INFO Fetch successful
Dec 13 15:44:16.413659 unknown[1167]: wrote ssh authorized keys file for user: core
Dec 13 15:44:16.428052 update-ssh-keys[1289]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 15:44:16.429124 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 15:44:16.429710 systemd[1]: Reached target multi-user.target.
Dec 13 15:44:16.432029 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 15:44:16.443208 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 15:44:16.443489 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 15:44:16.443752 systemd[1]: Startup finished in 1.160s (kernel) + 5.946s (initrd) + 13.689s (userspace) = 20.796s.
Dec 13 15:44:21.474354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 15:44:21.474905 systemd[1]: Stopped kubelet.service.
Dec 13 15:44:21.475014 systemd[1]: kubelet.service: Consumed 1.115s CPU time.
Dec 13 15:44:21.477957 systemd[1]: Starting kubelet.service...
Dec 13 15:44:21.640686 systemd[1]: Started kubelet.service.
Dec 13 15:44:21.725860 kubelet[1295]: E1213 15:44:21.725294 1295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:44:21.729806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:44:21.730046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:44:25.650396 systemd[1]: Started sshd@3-10.244.25.74:22-139.178.68.195:58176.service.
Dec 13 15:44:26.537490 sshd[1302]: Accepted publickey for core from 139.178.68.195 port 58176 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:44:26.540320 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:26.548112 systemd[1]: Started session-4.scope.
Dec 13 15:44:26.548644 systemd-logind[1178]: New session 4 of user core.
Dec 13 15:44:27.159458 sshd[1302]: pam_unix(sshd:session): session closed for user core
Dec 13 15:44:27.164631 systemd[1]: sshd@3-10.244.25.74:22-139.178.68.195:58176.service: Deactivated successfully.
Dec 13 15:44:27.165433 systemd-logind[1178]: Session 4 logged out. Waiting for processes to exit.
Dec 13 15:44:27.165628 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 15:44:27.167353 systemd-logind[1178]: Removed session 4.
Dec 13 15:44:27.308747 systemd[1]: Started sshd@4-10.244.25.74:22-139.178.68.195:50938.service.
Dec 13 15:44:28.197348 sshd[1308]: Accepted publickey for core from 139.178.68.195 port 50938 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:44:28.199331 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:28.206111 systemd-logind[1178]: New session 5 of user core.
Dec 13 15:44:28.206983 systemd[1]: Started session-5.scope.
Dec 13 15:44:28.813335 sshd[1308]: pam_unix(sshd:session): session closed for user core
Dec 13 15:44:28.817688 systemd[1]: sshd@4-10.244.25.74:22-139.178.68.195:50938.service: Deactivated successfully.
Dec 13 15:44:28.818831 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 15:44:28.819572 systemd-logind[1178]: Session 5 logged out. Waiting for processes to exit.
Dec 13 15:44:28.820759 systemd-logind[1178]: Removed session 5.
Dec 13 15:44:28.959717 systemd[1]: Started sshd@5-10.244.25.74:22-139.178.68.195:50952.service.
Dec 13 15:44:29.852969 sshd[1314]: Accepted publickey for core from 139.178.68.195 port 50952 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:44:29.855116 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:29.863979 systemd-logind[1178]: New session 6 of user core.
Dec 13 15:44:29.865573 systemd[1]: Started session-6.scope.
Dec 13 15:44:30.471319 sshd[1314]: pam_unix(sshd:session): session closed for user core
Dec 13 15:44:30.475590 systemd[1]: sshd@5-10.244.25.74:22-139.178.68.195:50952.service: Deactivated successfully.
Dec 13 15:44:30.476566 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 15:44:30.477413 systemd-logind[1178]: Session 6 logged out. Waiting for processes to exit.
Dec 13 15:44:30.478929 systemd-logind[1178]: Removed session 6.
Dec 13 15:44:30.619687 systemd[1]: Started sshd@6-10.244.25.74:22-139.178.68.195:50968.service.
Dec 13 15:44:31.513891 sshd[1320]: Accepted publickey for core from 139.178.68.195 port 50968 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:44:31.515909 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:44:31.522822 systemd-logind[1178]: New session 7 of user core.
Dec 13 15:44:31.523631 systemd[1]: Started session-7.scope.
Dec 13 15:44:31.974093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 15:44:31.974408 systemd[1]: Stopped kubelet.service.
Dec 13 15:44:31.976980 systemd[1]: Starting kubelet.service...
Dec 13 15:44:32.173905 systemd[1]: Started kubelet.service.
Dec 13 15:44:32.184910 sudo[1325]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 15:44:32.186050 sudo[1325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 15:44:32.210599 systemd[1]: Starting coreos-metadata.service...
Dec 13 15:44:32.250379 kubelet[1328]: E1213 15:44:32.249637 1328 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:44:32.252635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:44:32.252885 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:44:39.268647 coreos-metadata[1336]: Dec 13 15:44:39.267 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 15:44:39.320292 coreos-metadata[1336]: Dec 13 15:44:39.320 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 15:44:39.322099 coreos-metadata[1336]: Dec 13 15:44:39.321 INFO Fetch successful
Dec 13 15:44:39.322427 coreos-metadata[1336]: Dec 13 15:44:39.322 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 15:44:39.338575 coreos-metadata[1336]: Dec 13 15:44:39.338 INFO Fetch successful
Dec 13 15:44:39.338895 coreos-metadata[1336]: Dec 13 15:44:39.338 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 15:44:39.352594 coreos-metadata[1336]: Dec 13 15:44:39.352 INFO Fetch successful
Dec 13 15:44:39.352945 coreos-metadata[1336]: Dec 13 15:44:39.352 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 15:44:39.375397 coreos-metadata[1336]: Dec 13 15:44:39.375 INFO Fetch successful
Dec 13 15:44:39.375770 coreos-metadata[1336]: Dec 13 15:44:39.375 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 15:44:39.395035 coreos-metadata[1336]: Dec 13 15:44:39.394 INFO Fetch successful
Dec 13 15:44:39.407890 systemd[1]: Finished coreos-metadata.service.
Dec 13 15:44:40.167106 systemd[1]: Stopped kubelet.service.
Dec 13 15:44:40.172512 systemd[1]: Starting kubelet.service...
Dec 13 15:44:40.211546 systemd[1]: Reloading.
Dec 13 15:44:40.387164 /usr/lib/systemd/system-generators/torcx-generator[1395]: time="2024-12-13T15:44:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 15:44:40.387895 /usr/lib/systemd/system-generators/torcx-generator[1395]: time="2024-12-13T15:44:40Z" level=info msg="torcx already run"
Dec 13 15:44:40.486932 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 15:44:40.487485 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 15:44:40.518942 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 15:44:40.661866 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 15:44:40.677068 systemd[1]: Started kubelet.service.
Dec 13 15:44:40.683559 systemd[1]: Stopping kubelet.service...
Dec 13 15:44:40.684585 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 15:44:40.685058 systemd[1]: Stopped kubelet.service.
Dec 13 15:44:40.688591 systemd[1]: Starting kubelet.service...
Dec 13 15:44:40.812438 systemd[1]: Started kubelet.service.
Dec 13 15:44:40.906029 kubelet[1447]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 15:44:40.906876 kubelet[1447]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 15:44:40.907004 kubelet[1447]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 15:44:40.907353 kubelet[1447]: I1213 15:44:40.907282 1447 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 15:44:41.471119 kubelet[1447]: I1213 15:44:41.471018 1447 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 15:44:41.471119 kubelet[1447]: I1213 15:44:41.471073 1447 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 15:44:41.471509 kubelet[1447]: I1213 15:44:41.471482 1447 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 15:44:41.547184 kubelet[1447]: I1213 15:44:41.547075 1447 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 15:44:41.564095 kubelet[1447]: E1213 15:44:41.564014 1447 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 15:44:41.564095 kubelet[1447]: I1213 15:44:41.564098 1447 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 15:44:41.574586 kubelet[1447]: I1213 15:44:41.574546 1447 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 15:44:41.576492 kubelet[1447]: I1213 15:44:41.576443 1447 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 15:44:41.576852 kubelet[1447]: I1213 15:44:41.576789 1447 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 15:44:41.577154 kubelet[1447]: I1213 15:44:41.576849 1447 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.244.25.74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 15:44:41.577456 kubelet[1447]: I1213 15:44:41.577186 1447 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 15:44:41.577456 kubelet[1447]: I1213 15:44:41.577206 1447 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 15:44:41.577596 kubelet[1447]: I1213 15:44:41.577516 1447 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 15:44:41.580188 kubelet[1447]: I1213 15:44:41.580139 1447 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 15:44:41.580303 kubelet[1447]: I1213 15:44:41.580211 1447 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 15:44:41.580447 kubelet[1447]: I1213 15:44:41.580344 1447 kubelet.go:314] "Adding apiserver pod source"
Dec 13 15:44:41.580447 kubelet[1447]: I1213 15:44:41.580415 1447 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 15:44:41.582629 kubelet[1447]: E1213 15:44:41.582567 1447 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:44:41.582746 kubelet[1447]: E1213 15:44:41.582676 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:44:41.591744 kubelet[1447]: I1213 15:44:41.591700 1447 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 15:44:41.594305 kubelet[1447]: I1213 15:44:41.594261 1447 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 15:44:41.595515 kubelet[1447]: W1213 15:44:41.595488 1447 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 15:44:41.596880 kubelet[1447]: I1213 15:44:41.596849 1447 server.go:1269] "Started kubelet" Dec 13 15:44:41.598202 kubelet[1447]: I1213 15:44:41.597466 1447 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 15:44:41.600005 kubelet[1447]: I1213 15:44:41.599409 1447 server.go:460] "Adding debug handlers to kubelet server" Dec 13 15:44:41.603384 kubelet[1447]: I1213 15:44:41.603280 1447 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 15:44:41.604137 kubelet[1447]: I1213 15:44:41.604110 1447 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 15:44:41.605899 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 15:44:41.606884 kubelet[1447]: I1213 15:44:41.606391 1447 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 15:44:41.612572 kubelet[1447]: I1213 15:44:41.612536 1447 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 15:44:41.616252 kubelet[1447]: I1213 15:44:41.616194 1447 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 15:44:41.617040 kubelet[1447]: I1213 15:44:41.617016 1447 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 15:44:41.617517 kubelet[1447]: I1213 15:44:41.617495 1447 reconciler.go:26] "Reconciler: start to sync state" Dec 13 15:44:41.618717 kubelet[1447]: E1213 15:44:41.618664 1447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.244.25.74\" not found" Dec 13 15:44:41.619667 kubelet[1447]: E1213 15:44:41.619640 1447 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 15:44:41.620499 kubelet[1447]: I1213 15:44:41.620429 1447 factory.go:221] Registration of the systemd container factory successfully Dec 13 15:44:41.620982 kubelet[1447]: I1213 15:44:41.620929 1447 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 15:44:41.625514 kubelet[1447]: I1213 15:44:41.625483 1447 factory.go:221] Registration of the containerd container factory successfully Dec 13 15:44:41.656825 kubelet[1447]: E1213 15:44:41.656753 1447 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.244.25.74\" not found" node="10.244.25.74" Dec 13 15:44:41.668035 kubelet[1447]: I1213 15:44:41.667992 1447 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 15:44:41.668339 kubelet[1447]: I1213 15:44:41.668312 1447 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 15:44:41.668585 kubelet[1447]: I1213 15:44:41.668562 1447 state_mem.go:36] "Initialized new in-memory state store" Dec 13 15:44:41.673029 kubelet[1447]: I1213 15:44:41.672988 1447 policy_none.go:49] "None policy: Start" Dec 13 15:44:41.674263 kubelet[1447]: I1213 15:44:41.674237 1447 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 15:44:41.674495 kubelet[1447]: I1213 15:44:41.674472 1447 state_mem.go:35] "Initializing new in-memory state store" Dec 13 15:44:41.695396 systemd[1]: Created slice kubepods.slice. Dec 13 15:44:41.709999 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 15:44:41.719908 kubelet[1447]: E1213 15:44:41.719860 1447 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.244.25.74\" not found" Dec 13 15:44:41.723788 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 15:44:41.735514 kubelet[1447]: I1213 15:44:41.735011 1447 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 15:44:41.736724 kubelet[1447]: I1213 15:44:41.736067 1447 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 15:44:41.736724 kubelet[1447]: I1213 15:44:41.736111 1447 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 15:44:41.739414 kubelet[1447]: I1213 15:44:41.739045 1447 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 15:44:41.739835 kubelet[1447]: E1213 15:44:41.739805 1447 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.25.74\" not found" Dec 13 15:44:41.813772 kubelet[1447]: I1213 15:44:41.813647 1447 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 15:44:41.816544 kubelet[1447]: I1213 15:44:41.816504 1447 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 15:44:41.816657 kubelet[1447]: I1213 15:44:41.816595 1447 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 15:44:41.816657 kubelet[1447]: I1213 15:44:41.816643 1447 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 15:44:41.816806 kubelet[1447]: E1213 15:44:41.816737 1447 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 15:44:41.839006 kubelet[1447]: I1213 15:44:41.838923 1447 kubelet_node_status.go:72] "Attempting to register node" node="10.244.25.74" Dec 13 15:44:41.847482 kubelet[1447]: I1213 15:44:41.847430 1447 kubelet_node_status.go:75] "Successfully registered node" node="10.244.25.74" Dec 13 15:44:41.847482 kubelet[1447]: E1213 15:44:41.847483 1447 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.244.25.74\": node \"10.244.25.74\" not found" Dec 13 15:44:41.960524 kubelet[1447]: I1213 15:44:41.960472 1447 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 15:44:41.962055 env[1186]: time="2024-12-13T15:44:41.961822187Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 15:44:41.962908 kubelet[1447]: I1213 15:44:41.962551 1447 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 15:44:42.354921 sudo[1325]: pam_unix(sudo:session): session closed for user root Dec 13 15:44:42.474738 kubelet[1447]: I1213 15:44:42.474666 1447 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 15:44:42.475953 kubelet[1447]: W1213 15:44:42.475884 1447 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 15:44:42.476497 kubelet[1447]: W1213 15:44:42.476241 1447 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 15:44:42.476611 kubelet[1447]: W1213 15:44:42.476331 1447 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 15:44:42.501616 sshd[1320]: pam_unix(sshd:session): session closed for user core Dec 13 15:44:42.506560 systemd[1]: sshd@6-10.244.25.74:22-139.178.68.195:50968.service: Deactivated successfully. Dec 13 15:44:42.506864 systemd-logind[1178]: Session 7 logged out. Waiting for processes to exit. Dec 13 15:44:42.507911 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 15:44:42.509698 systemd-logind[1178]: Removed session 7. 
Dec 13 15:44:42.583176 kubelet[1447]: I1213 15:44:42.583118 1447 apiserver.go:52] "Watching apiserver" Dec 13 15:44:42.583702 kubelet[1447]: E1213 15:44:42.583673 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:42.598897 systemd[1]: Created slice kubepods-burstable-pod6121434d_09ad_4586_b7c4_c4d5fa1c4fca.slice. Dec 13 15:44:42.613287 systemd[1]: Created slice kubepods-besteffort-pod8a8265ac_f0f0_4c0c_bb0e_18e649645064.slice. Dec 13 15:44:42.618067 kubelet[1447]: I1213 15:44:42.618032 1447 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 15:44:42.625402 kubelet[1447]: I1213 15:44:42.625343 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-bpf-maps\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.625588 kubelet[1447]: I1213 15:44:42.625411 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-kernel\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.625588 kubelet[1447]: I1213 15:44:42.625442 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a8265ac-f0f0-4c0c-bb0e-18e649645064-kube-proxy\") pod \"kube-proxy-g962h\" (UID: \"8a8265ac-f0f0-4c0c-bb0e-18e649645064\") " pod="kube-system/kube-proxy-g962h" Dec 13 15:44:42.625588 kubelet[1447]: I1213 15:44:42.625471 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8a8265ac-f0f0-4c0c-bb0e-18e649645064-xtables-lock\") pod \"kube-proxy-g962h\" (UID: \"8a8265ac-f0f0-4c0c-bb0e-18e649645064\") " pod="kube-system/kube-proxy-g962h" Dec 13 15:44:42.625588 kubelet[1447]: I1213 15:44:42.625498 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfrn\" (UniqueName: \"kubernetes.io/projected/8a8265ac-f0f0-4c0c-bb0e-18e649645064-kube-api-access-ndfrn\") pod \"kube-proxy-g962h\" (UID: \"8a8265ac-f0f0-4c0c-bb0e-18e649645064\") " pod="kube-system/kube-proxy-g962h" Dec 13 15:44:42.625588 kubelet[1447]: I1213 15:44:42.625522 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-run\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.625874 kubelet[1447]: I1213 15:44:42.625549 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-etc-cni-netd\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.625874 kubelet[1447]: I1213 15:44:42.625573 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-net\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.625874 kubelet[1447]: I1213 15:44:42.625599 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a8265ac-f0f0-4c0c-bb0e-18e649645064-lib-modules\") pod \"kube-proxy-g962h\" 
(UID: \"8a8265ac-f0f0-4c0c-bb0e-18e649645064\") " pod="kube-system/kube-proxy-g962h" Dec 13 15:44:42.625874 kubelet[1447]: I1213 15:44:42.625623 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hostproc\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.625874 kubelet[1447]: I1213 15:44:42.625647 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-cgroup\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.625874 kubelet[1447]: I1213 15:44:42.625672 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cni-path\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.626184 kubelet[1447]: I1213 15:44:42.625700 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-config-path\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.626184 kubelet[1447]: I1213 15:44:42.625741 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-lib-modules\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.626184 kubelet[1447]: I1213 15:44:42.625773 
1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-xtables-lock\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.626184 kubelet[1447]: I1213 15:44:42.625815 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-clustermesh-secrets\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.626184 kubelet[1447]: I1213 15:44:42.625842 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hubble-tls\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.626184 kubelet[1447]: I1213 15:44:42.625884 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crl7l\" (UniqueName: \"kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-kube-api-access-crl7l\") pod \"cilium-gzrzn\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " pod="kube-system/cilium-gzrzn" Dec 13 15:44:42.730344 kubelet[1447]: I1213 15:44:42.730284 1447 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 15:44:42.913894 env[1186]: time="2024-12-13T15:44:42.911678880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzrzn,Uid:6121434d-09ad-4586-b7c4-c4d5fa1c4fca,Namespace:kube-system,Attempt:0,}" Dec 13 15:44:42.922713 env[1186]: time="2024-12-13T15:44:42.922619455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g962h,Uid:8a8265ac-f0f0-4c0c-bb0e-18e649645064,Namespace:kube-system,Attempt:0,}" Dec 13 15:44:43.584925 kubelet[1447]: E1213 15:44:43.584864 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:43.752984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305020954.mount: Deactivated successfully. Dec 13 15:44:43.760206 env[1186]: time="2024-12-13T15:44:43.760073534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.762289 env[1186]: time="2024-12-13T15:44:43.762161866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.765201 env[1186]: time="2024-12-13T15:44:43.765135491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.766650 env[1186]: time="2024-12-13T15:44:43.766609101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.769380 env[1186]: time="2024-12-13T15:44:43.769326931Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.771401 env[1186]: time="2024-12-13T15:44:43.771350207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.772629 env[1186]: time="2024-12-13T15:44:43.772590083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.776840 env[1186]: time="2024-12-13T15:44:43.776762832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:43.830180 env[1186]: time="2024-12-13T15:44:43.829806736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:44:43.830180 env[1186]: time="2024-12-13T15:44:43.829890767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:44:43.830180 env[1186]: time="2024-12-13T15:44:43.829918922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:44:43.833123 env[1186]: time="2024-12-13T15:44:43.831042240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:44:43.833123 env[1186]: time="2024-12-13T15:44:43.831096348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:44:43.833123 env[1186]: time="2024-12-13T15:44:43.831113957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:44:43.833123 env[1186]: time="2024-12-13T15:44:43.831278160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db pid=1511 runtime=io.containerd.runc.v2 Dec 13 15:44:43.833547 env[1186]: time="2024-12-13T15:44:43.830751530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/217fbe343ef633ec957edc80a0296fc9c94d4f748e64c60dd77c30a4a922864b pid=1510 runtime=io.containerd.runc.v2 Dec 13 15:44:43.864889 systemd[1]: Started cri-containerd-217fbe343ef633ec957edc80a0296fc9c94d4f748e64c60dd77c30a4a922864b.scope. Dec 13 15:44:43.881087 systemd[1]: Started cri-containerd-099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db.scope. 
Dec 13 15:44:43.940607 env[1186]: time="2024-12-13T15:44:43.940521021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzrzn,Uid:6121434d-09ad-4586-b7c4-c4d5fa1c4fca,Namespace:kube-system,Attempt:0,} returns sandbox id \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\"" Dec 13 15:44:43.944391 env[1186]: time="2024-12-13T15:44:43.944320734Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 15:44:43.953118 env[1186]: time="2024-12-13T15:44:43.953046149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g962h,Uid:8a8265ac-f0f0-4c0c-bb0e-18e649645064,Namespace:kube-system,Attempt:0,} returns sandbox id \"217fbe343ef633ec957edc80a0296fc9c94d4f748e64c60dd77c30a4a922864b\"" Dec 13 15:44:44.587136 kubelet[1447]: E1213 15:44:44.587004 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:45.588898 kubelet[1447]: E1213 15:44:45.588814 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:46.590029 kubelet[1447]: E1213 15:44:46.589948 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:47.590600 kubelet[1447]: E1213 15:44:47.590464 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:48.592731 kubelet[1447]: E1213 15:44:48.590923 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:49.591524 kubelet[1447]: E1213 15:44:49.591426 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:50.592595 kubelet[1447]: E1213 15:44:50.592506 1447 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:51.593634 kubelet[1447]: E1213 15:44:51.593536 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:52.419415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103382866.mount: Deactivated successfully. Dec 13 15:44:52.594117 kubelet[1447]: E1213 15:44:52.594046 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:53.594891 kubelet[1447]: E1213 15:44:53.594802 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:54.596657 kubelet[1447]: E1213 15:44:54.596529 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:54.659471 update_engine[1182]: I1213 15:44:54.659236 1182 update_attempter.cc:509] Updating boot flags... 
Dec 13 15:44:55.597647 kubelet[1447]: E1213 15:44:55.597551 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:56.598692 kubelet[1447]: E1213 15:44:56.598589 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:57.043663 env[1186]: time="2024-12-13T15:44:57.043497749Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:57.047509 env[1186]: time="2024-12-13T15:44:57.047318005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:57.052460 env[1186]: time="2024-12-13T15:44:57.052405397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:44:57.054416 env[1186]: time="2024-12-13T15:44:57.053475171Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 15:44:57.058286 env[1186]: time="2024-12-13T15:44:57.058226287Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 15:44:57.060291 env[1186]: time="2024-12-13T15:44:57.060233198Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 15:44:57.077399 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1200506112.mount: Deactivated successfully. Dec 13 15:44:57.086354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769388386.mount: Deactivated successfully. Dec 13 15:44:57.091144 env[1186]: time="2024-12-13T15:44:57.091089069Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\"" Dec 13 15:44:57.092251 env[1186]: time="2024-12-13T15:44:57.092215694Z" level=info msg="StartContainer for \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\"" Dec 13 15:44:57.129815 systemd[1]: Started cri-containerd-a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51.scope. Dec 13 15:44:57.191793 env[1186]: time="2024-12-13T15:44:57.191728619Z" level=info msg="StartContainer for \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\" returns successfully" Dec 13 15:44:57.205463 systemd[1]: cri-containerd-a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51.scope: Deactivated successfully. 
Dec 13 15:44:57.361844 env[1186]: time="2024-12-13T15:44:57.360872543Z" level=info msg="shim disconnected" id=a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51 Dec 13 15:44:57.361844 env[1186]: time="2024-12-13T15:44:57.360954920Z" level=warning msg="cleaning up after shim disconnected" id=a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51 namespace=k8s.io Dec 13 15:44:57.361844 env[1186]: time="2024-12-13T15:44:57.360974250Z" level=info msg="cleaning up dead shim" Dec 13 15:44:57.372828 env[1186]: time="2024-12-13T15:44:57.372747627Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:44:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1643 runtime=io.containerd.runc.v2\n" Dec 13 15:44:57.599287 kubelet[1447]: E1213 15:44:57.599124 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:57.894638 env[1186]: time="2024-12-13T15:44:57.894558378Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 15:44:57.934473 env[1186]: time="2024-12-13T15:44:57.934394231Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\"" Dec 13 15:44:57.936090 env[1186]: time="2024-12-13T15:44:57.936054324Z" level=info msg="StartContainer for \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\"" Dec 13 15:44:58.012832 systemd[1]: Started cri-containerd-2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87.scope. 
Dec 13 15:44:58.076964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51-rootfs.mount: Deactivated successfully. Dec 13 15:44:58.101129 env[1186]: time="2024-12-13T15:44:58.098865819Z" level=info msg="StartContainer for \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\" returns successfully" Dec 13 15:44:58.108744 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 15:44:58.109125 systemd[1]: Stopped systemd-sysctl.service. Dec 13 15:44:58.109626 systemd[1]: Stopping systemd-sysctl.service... Dec 13 15:44:58.114039 systemd[1]: Starting systemd-sysctl.service... Dec 13 15:44:58.125048 systemd[1]: cri-containerd-2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87.scope: Deactivated successfully. Dec 13 15:44:58.132333 systemd[1]: Finished systemd-sysctl.service. Dec 13 15:44:58.191531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87-rootfs.mount: Deactivated successfully. 
Dec 13 15:44:58.279395 env[1186]: time="2024-12-13T15:44:58.279269093Z" level=info msg="shim disconnected" id=2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87 Dec 13 15:44:58.279395 env[1186]: time="2024-12-13T15:44:58.279387168Z" level=warning msg="cleaning up after shim disconnected" id=2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87 namespace=k8s.io Dec 13 15:44:58.279395 env[1186]: time="2024-12-13T15:44:58.279407942Z" level=info msg="cleaning up dead shim" Dec 13 15:44:58.304441 env[1186]: time="2024-12-13T15:44:58.304346905Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:44:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1707 runtime=io.containerd.runc.v2\n" Dec 13 15:44:58.600645 kubelet[1447]: E1213 15:44:58.600457 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:58.870033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431225440.mount: Deactivated successfully. Dec 13 15:44:58.899156 env[1186]: time="2024-12-13T15:44:58.899082859Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 15:44:58.929589 env[1186]: time="2024-12-13T15:44:58.929502118Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\"" Dec 13 15:44:58.931030 env[1186]: time="2024-12-13T15:44:58.930954354Z" level=info msg="StartContainer for \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\"" Dec 13 15:44:58.976516 systemd[1]: Started cri-containerd-bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf.scope. 
Dec 13 15:44:59.052750 env[1186]: time="2024-12-13T15:44:59.052677373Z" level=info msg="StartContainer for \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\" returns successfully" Dec 13 15:44:59.055407 systemd[1]: cri-containerd-bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf.scope: Deactivated successfully. Dec 13 15:44:59.093544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf-rootfs.mount: Deactivated successfully. Dec 13 15:44:59.190054 env[1186]: time="2024-12-13T15:44:59.189263898Z" level=info msg="shim disconnected" id=bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf Dec 13 15:44:59.190054 env[1186]: time="2024-12-13T15:44:59.189334216Z" level=warning msg="cleaning up after shim disconnected" id=bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf namespace=k8s.io Dec 13 15:44:59.190054 env[1186]: time="2024-12-13T15:44:59.189352994Z" level=info msg="cleaning up dead shim" Dec 13 15:44:59.206887 env[1186]: time="2024-12-13T15:44:59.206807795Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:44:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1768 runtime=io.containerd.runc.v2\n" Dec 13 15:44:59.601567 kubelet[1447]: E1213 15:44:59.601487 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:44:59.903042 env[1186]: time="2024-12-13T15:44:59.902408211Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 15:44:59.935252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399716185.mount: Deactivated successfully. 
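The "shim disconnected" / "cleaning up after shim disconnected" pairs above repeat for each short-lived init container. A minimal sketch (plain Python, not a containerd API) for pulling the 64-hex container id out of such entries, e.g. to pair each exited container with its cleanup warnings:

```python
import re

# Matches journald-style containerd entries like the samples in this log;
# the regex shape is an assumption based on the quoting visible above.
SHIM_RE = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

def shim_ids(lines):
    """Return container ids from 'shim disconnected' entries, in order."""
    return [m.group(1) for line in lines for m in SHIM_RE.finditer(line)]

# Sample entry copied from this log
sample = (
    'env[1186]: time="2024-12-13T15:44:59.189263898Z" level=info '
    'msg="shim disconnected" id=bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf'
)
```

`shim_ids([sample])` yields the single id from the entry above; feeding it the full transcript would list every shim teardown in order.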
Dec 13 15:44:59.946743 env[1186]: time="2024-12-13T15:44:59.946667871Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\"" Dec 13 15:44:59.948036 env[1186]: time="2024-12-13T15:44:59.947999974Z" level=info msg="StartContainer for \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\"" Dec 13 15:44:59.985273 systemd[1]: Started cri-containerd-9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493.scope. Dec 13 15:45:00.075521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388520500.mount: Deactivated successfully. Dec 13 15:45:00.095053 systemd[1]: cri-containerd-9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493.scope: Deactivated successfully. Dec 13 15:45:00.097153 env[1186]: time="2024-12-13T15:45:00.096907813Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6121434d_09ad_4586_b7c4_c4d5fa1c4fca.slice/cri-containerd-9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493.scope/memory.events\": no such file or directory" Dec 13 15:45:00.107664 env[1186]: time="2024-12-13T15:45:00.107595169Z" level=info msg="StartContainer for \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\" returns successfully" Dec 13 15:45:00.135615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493-rootfs.mount: Deactivated successfully. 
Dec 13 15:45:00.158844 env[1186]: time="2024-12-13T15:45:00.158548708Z" level=info msg="shim disconnected" id=9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493 Dec 13 15:45:00.158844 env[1186]: time="2024-12-13T15:45:00.158637070Z" level=warning msg="cleaning up after shim disconnected" id=9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493 namespace=k8s.io Dec 13 15:45:00.158844 env[1186]: time="2024-12-13T15:45:00.158656740Z" level=info msg="cleaning up dead shim" Dec 13 15:45:00.180987 env[1186]: time="2024-12-13T15:45:00.180890972Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:45:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1828 runtime=io.containerd.runc.v2\n" Dec 13 15:45:00.187063 env[1186]: time="2024-12-13T15:45:00.187016396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:00.188988 env[1186]: time="2024-12-13T15:45:00.188948801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:00.190640 env[1186]: time="2024-12-13T15:45:00.190604276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:00.191495 env[1186]: time="2024-12-13T15:45:00.191435201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 15:45:00.192970 env[1186]: time="2024-12-13T15:45:00.192906820Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:00.194958 env[1186]: time="2024-12-13T15:45:00.194908501Z" level=info msg="CreateContainer within sandbox \"217fbe343ef633ec957edc80a0296fc9c94d4f748e64c60dd77c30a4a922864b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 15:45:00.210958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380016808.mount: Deactivated successfully. Dec 13 15:45:00.221797 env[1186]: time="2024-12-13T15:45:00.221732191Z" level=info msg="CreateContainer within sandbox \"217fbe343ef633ec957edc80a0296fc9c94d4f748e64c60dd77c30a4a922864b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a04c498d9f64221cb123eb4108b056b84042665fa5076da3126a3bc9a86f52a\"" Dec 13 15:45:00.223478 env[1186]: time="2024-12-13T15:45:00.223437894Z" level=info msg="StartContainer for \"6a04c498d9f64221cb123eb4108b056b84042665fa5076da3126a3bc9a86f52a\"" Dec 13 15:45:00.248251 systemd[1]: Started cri-containerd-6a04c498d9f64221cb123eb4108b056b84042665fa5076da3126a3bc9a86f52a.scope. 
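The `PullImage ... returns image reference` entries above pair a mutable tag (here `registry.k8s.io/kube-proxy:v1.31.4`) with the content-addressed sha256 id it resolved to. A sketch for extracting both from such an entry — the regex is an assumption matching the backslash-escaped quoting these entries use:

```python
import re

# PullImage results embed \"-escaped quotes inside the msg field.
PULL_RE = re.compile(
    r'PullImage \\"([^\\"]+)\\" returns image reference \\"(sha256:[0-9a-f]{64})\\"'
)

def pulled_image(line):
    """Return (tag, digest-id) from a PullImage result entry, or None."""
    m = PULL_RE.search(line)
    return m.groups() if m else None

# Sample entry copied from this log (raw strings keep the \" escapes intact)
sample = (
    r'env[1186]: time="2024-12-13T15:45:00.191435201Z" level=info '
    r'msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image '
    r'reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""'
)
```

Recording the tag-to-digest mapping this way is useful when auditing which exact image content a node actually ran.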
Dec 13 15:45:00.309730 env[1186]: time="2024-12-13T15:45:00.309662219Z" level=info msg="StartContainer for \"6a04c498d9f64221cb123eb4108b056b84042665fa5076da3126a3bc9a86f52a\" returns successfully" Dec 13 15:45:00.602637 kubelet[1447]: E1213 15:45:00.602567 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:00.909386 env[1186]: time="2024-12-13T15:45:00.908907137Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 15:45:00.933537 env[1186]: time="2024-12-13T15:45:00.933409427Z" level=info msg="CreateContainer within sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\"" Dec 13 15:45:00.934500 env[1186]: time="2024-12-13T15:45:00.934460012Z" level=info msg="StartContainer for \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\"" Dec 13 15:45:00.959153 systemd[1]: Started cri-containerd-2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9.scope. Dec 13 15:45:01.023844 env[1186]: time="2024-12-13T15:45:01.023710491Z" level=info msg="StartContainer for \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\" returns successfully" Dec 13 15:45:01.077172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293448434.mount: Deactivated successfully. 
Dec 13 15:45:01.242541 kubelet[1447]: I1213 15:45:01.241468 1447 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 15:45:01.581670 kubelet[1447]: E1213 15:45:01.581463 1447 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:01.603222 kubelet[1447]: E1213 15:45:01.603165 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:01.667411 kernel: Initializing XFRM netlink socket Dec 13 15:45:01.944809 kubelet[1447]: I1213 15:45:01.944500 1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gzrzn" podStartSLOduration=7.831729083 podStartE2EDuration="20.944427603s" podCreationTimestamp="2024-12-13 15:44:41 +0000 UTC" firstStartedPulling="2024-12-13 15:44:43.943420835 +0000 UTC m=+3.124345578" lastFinishedPulling="2024-12-13 15:44:57.056119353 +0000 UTC m=+16.237044098" observedRunningTime="2024-12-13 15:45:01.944246378 +0000 UTC m=+21.125171134" watchObservedRunningTime="2024-12-13 15:45:01.944427603 +0000 UTC m=+21.125352348" Dec 13 15:45:01.945517 kubelet[1447]: I1213 15:45:01.945435 1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g962h" podStartSLOduration=4.709652317 podStartE2EDuration="20.945423075s" podCreationTimestamp="2024-12-13 15:44:41 +0000 UTC" firstStartedPulling="2024-12-13 15:44:43.957393212 +0000 UTC m=+3.138317958" lastFinishedPulling="2024-12-13 15:45:00.193163972 +0000 UTC m=+19.374088716" observedRunningTime="2024-12-13 15:45:00.943151002 +0000 UTC m=+20.124075757" watchObservedRunningTime="2024-12-13 15:45:01.945423075 +0000 UTC m=+21.126347842" Dec 13 15:45:02.604835 kubelet[1447]: E1213 15:45:02.604696 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:03.427928 systemd-networkd[1022]: 
cilium_host: Link UP Dec 13 15:45:03.430207 systemd-networkd[1022]: cilium_net: Link UP Dec 13 15:45:03.434702 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 15:45:03.434919 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 15:45:03.434862 systemd-networkd[1022]: cilium_net: Gained carrier Dec 13 15:45:03.436157 systemd-networkd[1022]: cilium_host: Gained carrier Dec 13 15:45:03.606110 kubelet[1447]: E1213 15:45:03.605862 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:03.629686 systemd-networkd[1022]: cilium_vxlan: Link UP Dec 13 15:45:03.629700 systemd-networkd[1022]: cilium_vxlan: Gained carrier Dec 13 15:45:03.790717 systemd-networkd[1022]: cilium_host: Gained IPv6LL Dec 13 15:45:04.016454 kernel: NET: Registered PF_ALG protocol family Dec 13 15:45:04.302701 systemd-networkd[1022]: cilium_net: Gained IPv6LL Dec 13 15:45:04.606782 kubelet[1447]: E1213 15:45:04.606594 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:05.080368 systemd-networkd[1022]: lxc_health: Link UP Dec 13 15:45:05.112400 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 15:45:05.109607 systemd-networkd[1022]: lxc_health: Gained carrier Dec 13 15:45:05.454977 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL Dec 13 15:45:05.607864 kubelet[1447]: E1213 15:45:05.607771 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:06.222651 systemd-networkd[1022]: lxc_health: Gained IPv6LL Dec 13 15:45:06.539260 systemd[1]: Created slice kubepods-besteffort-podbd691130_f2aa_49a9_9fac_0f152f4b0925.slice. 
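The `pod_startup_latency_tracker` entries above embed every timestamp needed to re-derive the logged durations. A plain-Python cross-check (truncating kubelet's nanoseconds to `datetime`'s microsecond resolution) for the `cilium-gzrzn` entry:

```python
from datetime import datetime

def parse_k8s_ts(ts: str) -> datetime:
    """Parse '2024-12-13 15:44:43.943420835 +0000 UTC' style timestamps,
    truncating the fractional part to microseconds."""
    date, time_, _offset, _zone = ts.split()
    sec, _, frac = time_.partition(".")
    return datetime.fromisoformat(f"{date}T{sec}.{(frac + '000000')[:6]}+00:00")

# Image pull window: lastFinishedPulling - firstStartedPulling
pull_seconds = (
    parse_k8s_ts("2024-12-13 15:44:57.056119353 +0000 UTC")
    - parse_k8s_ts("2024-12-13 15:44:43.943420835 +0000 UTC")
).total_seconds()

# End-to-end startup: watchObservedRunningTime - podCreationTimestamp,
# which should agree with the logged podStartE2EDuration="20.944427603s"
e2e_seconds = (
    parse_k8s_ts("2024-12-13 15:45:01.944427603 +0000 UTC")
    - parse_k8s_ts("2024-12-13 15:44:41 +0000 UTC")
).total_seconds()
```

The derived values (about 13.11 s pulling, 20.94 s end-to-end) match the entry to microsecond precision, which is a quick sanity check when latency numbers in a log look suspicious.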
Dec 13 15:45:06.608792 kubelet[1447]: E1213 15:45:06.608707 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:06.612163 kubelet[1447]: I1213 15:45:06.612107 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2pb9\" (UniqueName: \"kubernetes.io/projected/bd691130-f2aa-49a9-9fac-0f152f4b0925-kube-api-access-j2pb9\") pod \"nginx-deployment-8587fbcb89-2g2r5\" (UID: \"bd691130-f2aa-49a9-9fac-0f152f4b0925\") " pod="default/nginx-deployment-8587fbcb89-2g2r5" Dec 13 15:45:06.846865 env[1186]: time="2024-12-13T15:45:06.846133400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2g2r5,Uid:bd691130-f2aa-49a9-9fac-0f152f4b0925,Namespace:default,Attempt:0,}" Dec 13 15:45:06.914512 systemd-networkd[1022]: lxc80f7390602d9: Link UP Dec 13 15:45:06.929668 kernel: eth0: renamed from tmp8655d Dec 13 15:45:06.937622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 15:45:06.937781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc80f7390602d9: link becomes ready Dec 13 15:45:06.935133 systemd-networkd[1022]: lxc80f7390602d9: Gained carrier Dec 13 15:45:07.202772 kubelet[1447]: I1213 15:45:07.202613 1447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:45:07.609228 kubelet[1447]: E1213 15:45:07.609142 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:08.270879 systemd-networkd[1022]: lxc80f7390602d9: Gained IPv6LL Dec 13 15:45:08.610836 kubelet[1447]: E1213 15:45:08.610529 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:09.612030 kubelet[1447]: E1213 15:45:09.611948 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
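The kubelet's `"Unable to read config path" path="/etc/kubernetes/manifests"` error recurs roughly once per second throughout this log (the static-pod manifest directory simply does not exist on this node). When scanning such transcripts, collapsing consecutive duplicates makes the genuinely new events stand out; a minimal sketch, where the timestamp-prefix regex is an assumption matching the `Dec 13 15:45:06.608792` format shown:

```python
import re
from itertools import groupby

# Strip the leading 'MMM DD HH:MM:SS.ffffff ' prefix so repeats group together.
TS_RE = re.compile(r"^\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+ ")

def collapse(lines):
    """Collapse consecutive identical messages into (message, count) pairs."""
    stripped = [TS_RE.sub("", line) for line in lines]
    return [(msg, len(list(group))) for msg, group in groupby(stripped)]

# Shortened sample run of the recurring kubelet error
log_lines = [
    "Dec 13 15:45:08.610836 kubelet[1447]: Unable to read config path /etc/kubernetes/manifests",
    "Dec 13 15:45:09.612030 kubelet[1447]: Unable to read config path /etc/kubernetes/manifests",
    "Dec 13 15:45:10.615599 kubelet[1447]: Unable to read config path /etc/kubernetes/manifests",
]
runs = collapse(log_lines)
```

This mirrors what `journalctl` does with its "message repeated N times" folding, but works on an already-captured transcript.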
Dec 13 15:45:10.615599 kubelet[1447]: E1213 15:45:10.615421 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:11.615966 kubelet[1447]: E1213 15:45:11.615900 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:11.858467 env[1186]: time="2024-12-13T15:45:11.858282349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:45:11.859302 env[1186]: time="2024-12-13T15:45:11.858407696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:45:11.859302 env[1186]: time="2024-12-13T15:45:11.859245792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:45:11.860571 env[1186]: time="2024-12-13T15:45:11.859917170Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8655dd02a0536b32f7bd85df04194b7790f1571b8015944f514a4358937c144e pid=2518 runtime=io.containerd.runc.v2 Dec 13 15:45:11.903237 systemd[1]: run-containerd-runc-k8s.io-8655dd02a0536b32f7bd85df04194b7790f1571b8015944f514a4358937c144e-runc.xPHPJJ.mount: Deactivated successfully. Dec 13 15:45:11.913513 systemd[1]: Started cri-containerd-8655dd02a0536b32f7bd85df04194b7790f1571b8015944f514a4358937c144e.scope. 
Dec 13 15:45:12.002253 env[1186]: time="2024-12-13T15:45:12.002156172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2g2r5,Uid:bd691130-f2aa-49a9-9fac-0f152f4b0925,Namespace:default,Attempt:0,} returns sandbox id \"8655dd02a0536b32f7bd85df04194b7790f1571b8015944f514a4358937c144e\"" Dec 13 15:45:12.006922 env[1186]: time="2024-12-13T15:45:12.006866458Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 15:45:12.618216 kubelet[1447]: E1213 15:45:12.618108 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:12.784701 systemd[1]: Started sshd@7-10.244.25.74:22-218.92.0.218:49798.service. Dec 13 15:45:13.619194 kubelet[1447]: E1213 15:45:13.619061 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:14.268973 sshd[2554]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Dec 13 15:45:14.620594 kubelet[1447]: E1213 15:45:14.619640 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:15.502620 sshd[2554]: Failed password for root from 218.92.0.218 port 49798 ssh2 Dec 13 15:45:15.620155 kubelet[1447]: E1213 15:45:15.620069 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:16.248718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715377773.mount: Deactivated successfully. 
Dec 13 15:45:16.621389 kubelet[1447]: E1213 15:45:16.620411 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:17.622917 kubelet[1447]: E1213 15:45:17.622809 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:18.623965 kubelet[1447]: E1213 15:45:18.623802 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:18.749109 env[1186]: time="2024-12-13T15:45:18.748956803Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:18.751409 env[1186]: time="2024-12-13T15:45:18.751350684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:18.754096 env[1186]: time="2024-12-13T15:45:18.754061320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:18.756885 env[1186]: time="2024-12-13T15:45:18.756825666Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:18.759447 env[1186]: time="2024-12-13T15:45:18.759385218Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 15:45:18.763743 env[1186]: time="2024-12-13T15:45:18.763692902Z" level=info msg="CreateContainer within sandbox 
\"8655dd02a0536b32f7bd85df04194b7790f1571b8015944f514a4358937c144e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 15:45:18.779528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572082733.mount: Deactivated successfully. Dec 13 15:45:18.788498 env[1186]: time="2024-12-13T15:45:18.788393865Z" level=info msg="CreateContainer within sandbox \"8655dd02a0536b32f7bd85df04194b7790f1571b8015944f514a4358937c144e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d742804c692d70f30d1c39bef96869f14b580be807192906b8083bec1df33603\"" Dec 13 15:45:18.790342 env[1186]: time="2024-12-13T15:45:18.790283762Z" level=info msg="StartContainer for \"d742804c692d70f30d1c39bef96869f14b580be807192906b8083bec1df33603\"" Dec 13 15:45:18.839624 systemd[1]: Started cri-containerd-d742804c692d70f30d1c39bef96869f14b580be807192906b8083bec1df33603.scope. Dec 13 15:45:18.880039 sshd[2554]: Failed password for root from 218.92.0.218 port 49798 ssh2 Dec 13 15:45:18.898731 env[1186]: time="2024-12-13T15:45:18.898592997Z" level=info msg="StartContainer for \"d742804c692d70f30d1c39bef96869f14b580be807192906b8083bec1df33603\" returns successfully" Dec 13 15:45:18.989803 kubelet[1447]: I1213 15:45:18.989464 1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-2g2r5" podStartSLOduration=6.234025536 podStartE2EDuration="12.989422473s" podCreationTimestamp="2024-12-13 15:45:06 +0000 UTC" firstStartedPulling="2024-12-13 15:45:12.005587878 +0000 UTC m=+31.186512617" lastFinishedPulling="2024-12-13 15:45:18.760984815 +0000 UTC m=+37.941909554" observedRunningTime="2024-12-13 15:45:18.9874493 +0000 UTC m=+38.168374061" watchObservedRunningTime="2024-12-13 15:45:18.989422473 +0000 UTC m=+38.170347225" Dec 13 15:45:19.625105 kubelet[1447]: E1213 15:45:19.625034 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:20.627295 
kubelet[1447]: E1213 15:45:20.627147 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:21.581549 kubelet[1447]: E1213 15:45:21.581471 1447 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:21.628035 kubelet[1447]: E1213 15:45:21.627956 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:22.398764 sshd[2554]: Failed password for root from 218.92.0.218 port 49798 ssh2 Dec 13 15:45:22.629981 kubelet[1447]: E1213 15:45:22.629897 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:23.630733 kubelet[1447]: E1213 15:45:23.630669 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:24.318302 sshd[2554]: Received disconnect from 218.92.0.218 port 49798:11: [preauth] Dec 13 15:45:24.318302 sshd[2554]: Disconnected from authenticating user root 218.92.0.218 port 49798 [preauth] Dec 13 15:45:24.319093 sshd[2554]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Dec 13 15:45:24.321400 systemd[1]: sshd@7-10.244.25.74:22-218.92.0.218:49798.service: Deactivated successfully. Dec 13 15:45:24.592446 systemd[1]: Started sshd@8-10.244.25.74:22-218.92.0.218:34204.service. 
Dec 13 15:45:24.632314 kubelet[1447]: E1213 15:45:24.632213 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:25.633502 kubelet[1447]: E1213 15:45:25.633398 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:26.302119 sshd[2615]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Dec 13 15:45:26.634424 kubelet[1447]: E1213 15:45:26.634100 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:27.635191 kubelet[1447]: E1213 15:45:27.635129 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:27.969759 systemd[1]: Created slice kubepods-besteffort-poda896917d_858d_4138_941e_778bc69d8aa2.slice. Dec 13 15:45:28.075991 kubelet[1447]: I1213 15:45:28.075936 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a896917d-858d-4138-941e-778bc69d8aa2-data\") pod \"nfs-server-provisioner-0\" (UID: \"a896917d-858d-4138-941e-778bc69d8aa2\") " pod="default/nfs-server-provisioner-0" Dec 13 15:45:28.076390 kubelet[1447]: I1213 15:45:28.076330 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthxp\" (UniqueName: \"kubernetes.io/projected/a896917d-858d-4138-941e-778bc69d8aa2-kube-api-access-sthxp\") pod \"nfs-server-provisioner-0\" (UID: \"a896917d-858d-4138-941e-778bc69d8aa2\") " pod="default/nfs-server-provisioner-0" Dec 13 15:45:28.182640 sshd[2615]: Failed password for root from 218.92.0.218 port 34204 ssh2 Dec 13 15:45:28.276550 env[1186]: time="2024-12-13T15:45:28.276227619Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a896917d-858d-4138-941e-778bc69d8aa2,Namespace:default,Attempt:0,}" Dec 13 15:45:28.343511 systemd-networkd[1022]: lxc81de5310485f: Link UP Dec 13 15:45:28.354403 kernel: eth0: renamed from tmpd2d6a Dec 13 15:45:28.363430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 15:45:28.363748 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc81de5310485f: link becomes ready Dec 13 15:45:28.368491 systemd-networkd[1022]: lxc81de5310485f: Gained carrier Dec 13 15:45:28.628532 env[1186]: time="2024-12-13T15:45:28.628184088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:45:28.628532 env[1186]: time="2024-12-13T15:45:28.628279408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:45:28.628532 env[1186]: time="2024-12-13T15:45:28.628298673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:45:28.629380 env[1186]: time="2024-12-13T15:45:28.629281138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2d6a309db9e0143bded223729310dd005216c3d59d3102f3373967c343abe03 pid=2661 runtime=io.containerd.runc.v2 Dec 13 15:45:28.636564 kubelet[1447]: E1213 15:45:28.636502 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:28.657574 systemd[1]: Started cri-containerd-d2d6a309db9e0143bded223729310dd005216c3d59d3102f3373967c343abe03.scope. 
Dec 13 15:45:28.748196 env[1186]: time="2024-12-13T15:45:28.748120399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a896917d-858d-4138-941e-778bc69d8aa2,Namespace:default,Attempt:0,} returns sandbox id \"d2d6a309db9e0143bded223729310dd005216c3d59d3102f3373967c343abe03\"" Dec 13 15:45:28.751851 env[1186]: time="2024-12-13T15:45:28.751815216Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 15:45:29.194379 systemd[1]: run-containerd-runc-k8s.io-d2d6a309db9e0143bded223729310dd005216c3d59d3102f3373967c343abe03-runc.b558Ym.mount: Deactivated successfully. Dec 13 15:45:29.637319 kubelet[1447]: E1213 15:45:29.637221 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:30.031193 systemd-networkd[1022]: lxc81de5310485f: Gained IPv6LL Dec 13 15:45:30.346169 sshd[2615]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Dec 13 15:45:30.641752 kubelet[1447]: E1213 15:45:30.640063 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:31.644355 kubelet[1447]: E1213 15:45:31.644229 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:32.645489 kubelet[1447]: E1213 15:45:32.645387 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:32.653321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227347692.mount: Deactivated successfully. 
Dec 13 15:45:33.110068 sshd[2615]: Failed password for root from 218.92.0.218 port 34204 ssh2 Dec 13 15:45:33.645670 kubelet[1447]: E1213 15:45:33.645590 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:34.646556 kubelet[1447]: E1213 15:45:34.646453 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:35.647717 kubelet[1447]: E1213 15:45:35.647636 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:36.311947 env[1186]: time="2024-12-13T15:45:36.311812102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:36.314269 env[1186]: time="2024-12-13T15:45:36.314220705Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:36.316986 env[1186]: time="2024-12-13T15:45:36.316943812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:36.319555 env[1186]: time="2024-12-13T15:45:36.319516509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:36.321021 env[1186]: time="2024-12-13T15:45:36.320949678Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 15:45:36.325519 env[1186]: time="2024-12-13T15:45:36.325400819Z" level=info msg="CreateContainer within sandbox \"d2d6a309db9e0143bded223729310dd005216c3d59d3102f3373967c343abe03\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 15:45:36.344809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840565043.mount: Deactivated successfully. Dec 13 15:45:36.354091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703465697.mount: Deactivated successfully. Dec 13 15:45:36.357588 env[1186]: time="2024-12-13T15:45:36.357516146Z" level=info msg="CreateContainer within sandbox \"d2d6a309db9e0143bded223729310dd005216c3d59d3102f3373967c343abe03\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6c16f33b86c8ea0879b1c516862e701d7a1cd1c4629b591426b3b465643fa3df\"" Dec 13 15:45:36.359069 env[1186]: time="2024-12-13T15:45:36.359033005Z" level=info msg="StartContainer for \"6c16f33b86c8ea0879b1c516862e701d7a1cd1c4629b591426b3b465643fa3df\"" Dec 13 15:45:36.400000 systemd[1]: Started cri-containerd-6c16f33b86c8ea0879b1c516862e701d7a1cd1c4629b591426b3b465643fa3df.scope. 
Dec 13 15:45:36.467526 env[1186]: time="2024-12-13T15:45:36.467444209Z" level=info msg="StartContainer for \"6c16f33b86c8ea0879b1c516862e701d7a1cd1c4629b591426b3b465643fa3df\" returns successfully" Dec 13 15:45:36.646674 sshd[2615]: Failed password for root from 218.92.0.218 port 34204 ssh2 Dec 13 15:45:36.648710 kubelet[1447]: E1213 15:45:36.648636 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:37.038127 kubelet[1447]: I1213 15:45:37.037995 1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.465475578 podStartE2EDuration="10.037921392s" podCreationTimestamp="2024-12-13 15:45:27 +0000 UTC" firstStartedPulling="2024-12-13 15:45:28.750604131 +0000 UTC m=+47.931528869" lastFinishedPulling="2024-12-13 15:45:36.323049937 +0000 UTC m=+55.503974683" observedRunningTime="2024-12-13 15:45:37.037175379 +0000 UTC m=+56.218100126" watchObservedRunningTime="2024-12-13 15:45:37.037921392 +0000 UTC m=+56.218846144" Dec 13 15:45:37.649892 kubelet[1447]: E1213 15:45:37.649827 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:38.347949 sshd[2615]: Received disconnect from 218.92.0.218 port 34204:11: [preauth] Dec 13 15:45:38.347949 sshd[2615]: Disconnected from authenticating user root 218.92.0.218 port 34204 [preauth] Dec 13 15:45:38.349011 sshd[2615]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Dec 13 15:45:38.351706 systemd[1]: sshd@8-10.244.25.74:22-218.92.0.218:34204.service: Deactivated successfully. Dec 13 15:45:38.636497 systemd[1]: Started sshd@9-10.244.25.74:22-218.92.0.218:48780.service. 
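The sshd entries interleaved above show a classic brute-force pattern from 218.92.0.218: repeated `Failed password for root`, a `pam_faillock` lockout, then reconnects on fresh ports. A sketch for tallying failed attempts per source address and user from such lines (the regex shape is an assumption based on the entries shown):

```python
import re
from collections import Counter

FAIL_RE = re.compile(r"Failed password for (\S+) from (\S+) port (\d+)")

def failed_auth_counts(lines):
    """Count 'Failed password' attempts keyed by (source address, user)."""
    counts = Counter()
    for line in lines:
        m = FAIL_RE.search(line)
        if m:
            user, addr, _port = m.groups()
            counts[(addr, user)] += 1
    return counts

# Sample entries copied from this log
auth_lines = [
    "Dec 13 15:45:15.502620 sshd[2554]: Failed password for root from 218.92.0.218 port 49798 ssh2",
    "Dec 13 15:45:18.880039 sshd[2554]: Failed password for root from 218.92.0.218 port 49798 ssh2",
    "Dec 13 15:45:22.398764 sshd[2554]: Failed password for root from 218.92.0.218 port 49798 ssh2",
]
counts = failed_auth_counts(auth_lines)
```

A tally like this is the usual input to a block decision (fail2ban-style); on this host the attempts fail anyway, but the noise is worth rate-limiting.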
Dec 13 15:45:38.651065 kubelet[1447]: E1213 15:45:38.651017 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:39.652270 kubelet[1447]: E1213 15:45:39.652196 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:40.433422 sshd[2764]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Dec 13 15:45:40.653395 kubelet[1447]: E1213 15:45:40.653286 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:41.581315 kubelet[1447]: E1213 15:45:41.581243 1447 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:41.653700 kubelet[1447]: E1213 15:45:41.653617 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:42.570024 sshd[2764]: Failed password for root from 218.92.0.218 port 48780 ssh2 Dec 13 15:45:42.654854 kubelet[1447]: E1213 15:45:42.654784 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:43.655275 kubelet[1447]: E1213 15:45:43.655153 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:44.656326 kubelet[1447]: E1213 15:45:44.656248 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:45.657266 kubelet[1447]: E1213 15:45:45.657183 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:46.143596 sshd[2764]: Failed password for root from 218.92.0.218 port 48780 ssh2 Dec 13 15:45:46.437633 systemd[1]: 
Created slice kubepods-besteffort-podd1ffd787_ca09_4843_ad9b_cc246dd37df3.slice. Dec 13 15:45:46.513791 kubelet[1447]: I1213 15:45:46.513727 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-763544a2-2c5b-41fc-9b11-600e90a823bc\" (UniqueName: \"kubernetes.io/nfs/d1ffd787-ca09-4843-ad9b-cc246dd37df3-pvc-763544a2-2c5b-41fc-9b11-600e90a823bc\") pod \"test-pod-1\" (UID: \"d1ffd787-ca09-4843-ad9b-cc246dd37df3\") " pod="default/test-pod-1" Dec 13 15:45:46.514261 kubelet[1447]: I1213 15:45:46.514233 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtwm\" (UniqueName: \"kubernetes.io/projected/d1ffd787-ca09-4843-ad9b-cc246dd37df3-kube-api-access-5qtwm\") pod \"test-pod-1\" (UID: \"d1ffd787-ca09-4843-ad9b-cc246dd37df3\") " pod="default/test-pod-1" Dec 13 15:45:46.658469 kubelet[1447]: E1213 15:45:46.658100 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:46.665486 kernel: FS-Cache: Loaded Dec 13 15:45:46.730391 kernel: RPC: Registered named UNIX socket transport module. Dec 13 15:45:46.730621 kernel: RPC: Registered udp transport module. Dec 13 15:45:46.730688 kernel: RPC: Registered tcp transport module. Dec 13 15:45:46.731434 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 15:45:46.822413 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 15:45:47.067931 kernel: NFS: Registering the id_resolver key type Dec 13 15:45:47.068193 kernel: Key type id_resolver registered Dec 13 15:45:47.068276 kernel: Key type id_legacy registered Dec 13 15:45:47.160883 nfsidmap[2786]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 15:45:47.170039 nfsidmap[2789]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 15:45:47.347938 env[1186]: time="2024-12-13T15:45:47.346400407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d1ffd787-ca09-4843-ad9b-cc246dd37df3,Namespace:default,Attempt:0,}" Dec 13 15:45:47.430399 kernel: eth0: renamed from tmp31633 Dec 13 15:45:47.438756 systemd-networkd[1022]: lxc23ba111d67bb: Link UP Dec 13 15:45:47.447420 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 15:45:47.447570 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc23ba111d67bb: link becomes ready Dec 13 15:45:47.447800 systemd-networkd[1022]: lxc23ba111d67bb: Gained carrier Dec 13 15:45:47.659459 kubelet[1447]: E1213 15:45:47.659319 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:47.708849 env[1186]: time="2024-12-13T15:45:47.707977212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:45:47.709137 env[1186]: time="2024-12-13T15:45:47.708089158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:45:47.709137 env[1186]: time="2024-12-13T15:45:47.708114625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:45:47.709372 env[1186]: time="2024-12-13T15:45:47.709185175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3163313032bd623a9c285e97ab2ed5b12677972cc4eb6d733725412761ffbedc pid=2825 runtime=io.containerd.runc.v2 Dec 13 15:45:47.744983 systemd[1]: run-containerd-runc-k8s.io-3163313032bd623a9c285e97ab2ed5b12677972cc4eb6d733725412761ffbedc-runc.hLgaDJ.mount: Deactivated successfully. Dec 13 15:45:47.749737 systemd[1]: Started cri-containerd-3163313032bd623a9c285e97ab2ed5b12677972cc4eb6d733725412761ffbedc.scope. Dec 13 15:45:47.830011 env[1186]: time="2024-12-13T15:45:47.829942088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d1ffd787-ca09-4843-ad9b-cc246dd37df3,Namespace:default,Attempt:0,} returns sandbox id \"3163313032bd623a9c285e97ab2ed5b12677972cc4eb6d733725412761ffbedc\"" Dec 13 15:45:47.832735 env[1186]: time="2024-12-13T15:45:47.832701194Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 15:45:48.233272 env[1186]: time="2024-12-13T15:45:48.233136477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:48.236678 env[1186]: time="2024-12-13T15:45:48.236026696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:48.238215 env[1186]: time="2024-12-13T15:45:48.238165994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:48.240569 env[1186]: time="2024-12-13T15:45:48.240525210Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:45:48.241675 env[1186]: time="2024-12-13T15:45:48.241632828Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 15:45:48.246597 env[1186]: time="2024-12-13T15:45:48.246555493Z" level=info msg="CreateContainer within sandbox \"3163313032bd623a9c285e97ab2ed5b12677972cc4eb6d733725412761ffbedc\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 15:45:48.268158 env[1186]: time="2024-12-13T15:45:48.268078811Z" level=info msg="CreateContainer within sandbox \"3163313032bd623a9c285e97ab2ed5b12677972cc4eb6d733725412761ffbedc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5c096b72500b041b5058f96a4f76499fb662d0c40e000fa8ec9e77d363f1df97\"" Dec 13 15:45:48.270035 env[1186]: time="2024-12-13T15:45:48.269996949Z" level=info msg="StartContainer for \"5c096b72500b041b5058f96a4f76499fb662d0c40e000fa8ec9e77d363f1df97\"" Dec 13 15:45:48.295888 systemd[1]: Started cri-containerd-5c096b72500b041b5058f96a4f76499fb662d0c40e000fa8ec9e77d363f1df97.scope. Dec 13 15:45:48.350884 env[1186]: time="2024-12-13T15:45:48.350814848Z" level=info msg="StartContainer for \"5c096b72500b041b5058f96a4f76499fb662d0c40e000fa8ec9e77d363f1df97\" returns successfully" Dec 13 15:45:48.660419 kubelet[1447]: E1213 15:45:48.660140 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:48.714832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783769171.mount: Deactivated successfully. 
Dec 13 15:45:48.905144 sshd[2764]: Failed password for root from 218.92.0.218 port 48780 ssh2 Dec 13 15:45:49.487044 systemd-networkd[1022]: lxc23ba111d67bb: Gained IPv6LL Dec 13 15:45:49.660523 kubelet[1447]: E1213 15:45:49.660323 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:50.638706 sshd[2764]: Received disconnect from 218.92.0.218 port 48780:11: [preauth] Dec 13 15:45:50.638706 sshd[2764]: Disconnected from authenticating user root 218.92.0.218 port 48780 [preauth] Dec 13 15:45:50.639491 sshd[2764]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Dec 13 15:45:50.641684 systemd[1]: sshd@9-10.244.25.74:22-218.92.0.218:48780.service: Deactivated successfully. Dec 13 15:45:50.660690 kubelet[1447]: E1213 15:45:50.660643 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:51.661655 kubelet[1447]: E1213 15:45:51.661449 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:52.662809 kubelet[1447]: E1213 15:45:52.662653 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:53.663693 kubelet[1447]: E1213 15:45:53.663622 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:54.665026 kubelet[1447]: E1213 15:45:54.664959 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:55.258889 kubelet[1447]: I1213 15:45:55.258773 1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=25.847243903 podStartE2EDuration="26.258726173s" podCreationTimestamp="2024-12-13 15:45:29 +0000 UTC" 
firstStartedPulling="2024-12-13 15:45:47.831978796 +0000 UTC m=+67.012903545" lastFinishedPulling="2024-12-13 15:45:48.243461072 +0000 UTC m=+67.424385815" observedRunningTime="2024-12-13 15:45:49.072398821 +0000 UTC m=+68.253323573" watchObservedRunningTime="2024-12-13 15:45:55.258726173 +0000 UTC m=+74.439650923" Dec 13 15:45:55.284882 systemd[1]: run-containerd-runc-k8s.io-2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9-runc.fGoKbu.mount: Deactivated successfully. Dec 13 15:45:55.327976 env[1186]: time="2024-12-13T15:45:55.327853878Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 15:45:55.337607 env[1186]: time="2024-12-13T15:45:55.337537504Z" level=info msg="StopContainer for \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\" with timeout 2 (s)" Dec 13 15:45:55.338109 env[1186]: time="2024-12-13T15:45:55.338058114Z" level=info msg="Stop container \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\" with signal terminated" Dec 13 15:45:55.351159 systemd-networkd[1022]: lxc_health: Link DOWN Dec 13 15:45:55.351171 systemd-networkd[1022]: lxc_health: Lost carrier Dec 13 15:45:55.398294 systemd[1]: cri-containerd-2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9.scope: Deactivated successfully. Dec 13 15:45:55.399169 systemd[1]: cri-containerd-2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9.scope: Consumed 10.237s CPU time. Dec 13 15:45:55.430144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9-rootfs.mount: Deactivated successfully. 
Dec 13 15:45:55.438496 env[1186]: time="2024-12-13T15:45:55.438409259Z" level=info msg="shim disconnected" id=2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9 Dec 13 15:45:55.438496 env[1186]: time="2024-12-13T15:45:55.438482837Z" level=warning msg="cleaning up after shim disconnected" id=2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9 namespace=k8s.io Dec 13 15:45:55.438844 env[1186]: time="2024-12-13T15:45:55.438512109Z" level=info msg="cleaning up dead shim" Dec 13 15:45:55.452977 env[1186]: time="2024-12-13T15:45:55.452905127Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:45:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2957 runtime=io.containerd.runc.v2\n" Dec 13 15:45:55.455694 env[1186]: time="2024-12-13T15:45:55.455652499Z" level=info msg="StopContainer for \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\" returns successfully" Dec 13 15:45:55.457241 env[1186]: time="2024-12-13T15:45:55.457192439Z" level=info msg="StopPodSandbox for \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\"" Dec 13 15:45:55.457342 env[1186]: time="2024-12-13T15:45:55.457283719Z" level=info msg="Container to stop \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 15:45:55.457342 env[1186]: time="2024-12-13T15:45:55.457314595Z" level=info msg="Container to stop \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 15:45:55.457342 env[1186]: time="2024-12-13T15:45:55.457333428Z" level=info msg="Container to stop \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 15:45:55.460173 env[1186]: time="2024-12-13T15:45:55.457351982Z" level=info msg="Container to stop 
\"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 15:45:55.460173 env[1186]: time="2024-12-13T15:45:55.457484718Z" level=info msg="Container to stop \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 15:45:55.460207 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db-shm.mount: Deactivated successfully. Dec 13 15:45:55.468893 systemd[1]: cri-containerd-099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db.scope: Deactivated successfully. Dec 13 15:45:55.498433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db-rootfs.mount: Deactivated successfully. Dec 13 15:45:55.503673 env[1186]: time="2024-12-13T15:45:55.503611919Z" level=info msg="shim disconnected" id=099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db Dec 13 15:45:55.503987 env[1186]: time="2024-12-13T15:45:55.503953719Z" level=warning msg="cleaning up after shim disconnected" id=099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db namespace=k8s.io Dec 13 15:45:55.504124 env[1186]: time="2024-12-13T15:45:55.504094615Z" level=info msg="cleaning up dead shim" Dec 13 15:45:55.517803 env[1186]: time="2024-12-13T15:45:55.516156463Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:45:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2987 runtime=io.containerd.runc.v2\n" Dec 13 15:45:55.517803 env[1186]: time="2024-12-13T15:45:55.517076568Z" level=info msg="TearDown network for sandbox \"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" successfully" Dec 13 15:45:55.517803 env[1186]: time="2024-12-13T15:45:55.517110399Z" level=info msg="StopPodSandbox for 
\"099f901aff0b89d36afb9d6fbd75802ef652d2edc049e8553207531cbb3ed5db\" returns successfully" Dec 13 15:45:55.666114 kubelet[1447]: E1213 15:45:55.666043 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:55.687567 kubelet[1447]: I1213 15:45:55.687504 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-config-path\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.688738 kubelet[1447]: I1213 15:45:55.687945 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hubble-tls\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.688738 kubelet[1447]: I1213 15:45:55.687994 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crl7l\" (UniqueName: \"kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-kube-api-access-crl7l\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.688738 kubelet[1447]: I1213 15:45:55.688031 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-bpf-maps\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.688738 kubelet[1447]: I1213 15:45:55.688063 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-kernel\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: 
\"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.688738 kubelet[1447]: I1213 15:45:55.688089 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-run\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.688738 kubelet[1447]: I1213 15:45:55.688111 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hostproc\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.689229 kubelet[1447]: I1213 15:45:55.688134 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-lib-modules\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.689229 kubelet[1447]: I1213 15:45:55.688156 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-xtables-lock\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.689229 kubelet[1447]: I1213 15:45:55.688180 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cni-path\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.689229 kubelet[1447]: I1213 15:45:55.688217 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-clustermesh-secrets\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.689229 kubelet[1447]: I1213 15:45:55.688254 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-etc-cni-netd\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.689229 kubelet[1447]: I1213 15:45:55.688295 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-net\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.689653 kubelet[1447]: I1213 15:45:55.688328 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-cgroup\") pod \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\" (UID: \"6121434d-09ad-4586-b7c4-c4d5fa1c4fca\") " Dec 13 15:45:55.690641 kubelet[1447]: I1213 15:45:55.690128 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.691646 kubelet[1447]: I1213 15:45:55.691607 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 15:45:55.691758 kubelet[1447]: I1213 15:45:55.691685 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.697212 kubelet[1447]: I1213 15:45:55.697167 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 15:45:55.697471 kubelet[1447]: I1213 15:45:55.697442 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.697634 kubelet[1447]: I1213 15:45:55.697261 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-kube-api-access-crl7l" (OuterVolumeSpecName: "kube-api-access-crl7l") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "kube-api-access-crl7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 15:45:55.697763 kubelet[1447]: I1213 15:45:55.697514 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.697900 kubelet[1447]: I1213 15:45:55.697543 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.698027 kubelet[1447]: I1213 15:45:55.697565 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.698139 kubelet[1447]: I1213 15:45:55.697587 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hostproc" (OuterVolumeSpecName: "hostproc") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.698323 kubelet[1447]: I1213 15:45:55.698277 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.698550 kubelet[1447]: I1213 15:45:55.698512 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.698720 kubelet[1447]: I1213 15:45:55.698694 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cni-path" (OuterVolumeSpecName: "cni-path") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:45:55.701545 kubelet[1447]: I1213 15:45:55.701509 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6121434d-09ad-4586-b7c4-c4d5fa1c4fca" (UID: "6121434d-09ad-4586-b7c4-c4d5fa1c4fca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.788977 1447 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-config-path\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.789798 1447 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hubble-tls\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.789824 1447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-crl7l\" (UniqueName: \"kubernetes.io/projected/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-kube-api-access-crl7l\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.789840 1447 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-bpf-maps\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.789855 1447 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-kernel\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.789879 1447 reconciler_common.go:288] "Volume detached for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-run\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.789893 1447 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-hostproc\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790148 kubelet[1447]: I1213 15:45:55.789922 1447 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-lib-modules\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790801 kubelet[1447]: I1213 15:45:55.789936 1447 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-xtables-lock\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790801 kubelet[1447]: I1213 15:45:55.789951 1447 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cni-path\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790801 kubelet[1447]: I1213 15:45:55.789976 1447 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-clustermesh-secrets\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790801 kubelet[1447]: I1213 15:45:55.789990 1447 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-etc-cni-netd\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790801 kubelet[1447]: I1213 15:45:55.790004 1447 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-host-proc-sys-net\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.790801 kubelet[1447]: I1213 15:45:55.790017 1447 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6121434d-09ad-4586-b7c4-c4d5fa1c4fca-cilium-cgroup\") on node \"10.244.25.74\" DevicePath \"\"" Dec 13 15:45:55.826998 systemd[1]: Removed slice kubepods-burstable-pod6121434d_09ad_4586_b7c4_c4d5fa1c4fca.slice. Dec 13 15:45:55.827152 systemd[1]: kubepods-burstable-pod6121434d_09ad_4586_b7c4_c4d5fa1c4fca.slice: Consumed 10.430s CPU time. Dec 13 15:45:56.083596 kubelet[1447]: I1213 15:45:56.080501 1447 scope.go:117] "RemoveContainer" containerID="2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9" Dec 13 15:45:56.087542 env[1186]: time="2024-12-13T15:45:56.086893669Z" level=info msg="RemoveContainer for \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\"" Dec 13 15:45:56.092981 env[1186]: time="2024-12-13T15:45:56.092919052Z" level=info msg="RemoveContainer for \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\" returns successfully" Dec 13 15:45:56.093527 kubelet[1447]: I1213 15:45:56.093481 1447 scope.go:117] "RemoveContainer" containerID="9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493" Dec 13 15:45:56.096193 env[1186]: time="2024-12-13T15:45:56.095634887Z" level=info msg="RemoveContainer for \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\"" Dec 13 15:45:56.100069 env[1186]: time="2024-12-13T15:45:56.099977749Z" level=info msg="RemoveContainer for \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\" returns successfully" Dec 13 15:45:56.100622 kubelet[1447]: I1213 15:45:56.100579 1447 scope.go:117] "RemoveContainer" containerID="bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf" Dec 13 15:45:56.102615 env[1186]: time="2024-12-13T15:45:56.102415458Z" 
level=info msg="RemoveContainer for \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\"" Dec 13 15:45:56.111969 env[1186]: time="2024-12-13T15:45:56.111915470Z" level=info msg="RemoveContainer for \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\" returns successfully" Dec 13 15:45:56.112778 kubelet[1447]: I1213 15:45:56.112747 1447 scope.go:117] "RemoveContainer" containerID="2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87" Dec 13 15:45:56.114440 env[1186]: time="2024-12-13T15:45:56.114392268Z" level=info msg="RemoveContainer for \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\"" Dec 13 15:45:56.118079 env[1186]: time="2024-12-13T15:45:56.118037999Z" level=info msg="RemoveContainer for \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\" returns successfully" Dec 13 15:45:56.118402 kubelet[1447]: I1213 15:45:56.118349 1447 scope.go:117] "RemoveContainer" containerID="a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51" Dec 13 15:45:56.119933 env[1186]: time="2024-12-13T15:45:56.119869380Z" level=info msg="RemoveContainer for \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\"" Dec 13 15:45:56.123119 env[1186]: time="2024-12-13T15:45:56.123061417Z" level=info msg="RemoveContainer for \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\" returns successfully" Dec 13 15:45:56.123431 kubelet[1447]: I1213 15:45:56.123341 1447 scope.go:117] "RemoveContainer" containerID="2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9" Dec 13 15:45:56.123989 env[1186]: time="2024-12-13T15:45:56.123823150Z" level=error msg="ContainerStatus for \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\": not found" Dec 13 15:45:56.124254 kubelet[1447]: E1213 15:45:56.124204 
1447 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\": not found" containerID="2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9" Dec 13 15:45:56.124532 kubelet[1447]: I1213 15:45:56.124404 1447 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9"} err="failed to get container status \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f486633dbfee25958e61d8edeb9969f4eca88fa2909caeca31537a8bc8cf6b9\": not found" Dec 13 15:45:56.124692 kubelet[1447]: I1213 15:45:56.124666 1447 scope.go:117] "RemoveContainer" containerID="9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493" Dec 13 15:45:56.125064 env[1186]: time="2024-12-13T15:45:56.125001270Z" level=error msg="ContainerStatus for \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\": not found" Dec 13 15:45:56.125321 kubelet[1447]: E1213 15:45:56.125291 1447 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\": not found" containerID="9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493" Dec 13 15:45:56.125567 kubelet[1447]: I1213 15:45:56.125524 1447 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493"} err="failed to get container status 
\"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f4763e77e1f061519f12719c5e9e24960308820b5d9214b8fa132ebfe88d493\": not found" Dec 13 15:45:56.125712 kubelet[1447]: I1213 15:45:56.125670 1447 scope.go:117] "RemoveContainer" containerID="bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf" Dec 13 15:45:56.126135 env[1186]: time="2024-12-13T15:45:56.126054964Z" level=error msg="ContainerStatus for \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\": not found" Dec 13 15:45:56.126336 kubelet[1447]: E1213 15:45:56.126307 1447 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\": not found" containerID="bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf" Dec 13 15:45:56.126550 kubelet[1447]: I1213 15:45:56.126507 1447 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf"} err="failed to get container status \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfff801533df96e38f1dcd21a4e33b0705a04dcac4e3486ae0a7d8f1bf854acf\": not found" Dec 13 15:45:56.126713 kubelet[1447]: I1213 15:45:56.126689 1447 scope.go:117] "RemoveContainer" containerID="2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87" Dec 13 15:45:56.127250 env[1186]: time="2024-12-13T15:45:56.127142329Z" level=error msg="ContainerStatus for \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\": not found" Dec 13 15:45:56.127480 kubelet[1447]: E1213 15:45:56.127450 1447 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\": not found" containerID="2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87" Dec 13 15:45:56.127644 kubelet[1447]: I1213 15:45:56.127611 1447 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87"} err="failed to get container status \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ed4b63d583981dfa4a21fbf59a5f990dc8067ec0e0f9ef97cd04427eb092d87\": not found" Dec 13 15:45:56.127768 kubelet[1447]: I1213 15:45:56.127741 1447 scope.go:117] "RemoveContainer" containerID="a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51" Dec 13 15:45:56.128397 env[1186]: time="2024-12-13T15:45:56.128215827Z" level=error msg="ContainerStatus for \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\": not found" Dec 13 15:45:56.128621 kubelet[1447]: E1213 15:45:56.128587 1447 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\": not found" containerID="a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51" Dec 13 15:45:56.128822 kubelet[1447]: I1213 15:45:56.128783 1447 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51"} err="failed to get container status \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7796837310eb05dae0baabe52e07cff15bfbd80ad071219e8b5ad9a77523f51\": not found" Dec 13 15:45:56.280394 systemd[1]: var-lib-kubelet-pods-6121434d\x2d09ad\x2d4586\x2db7c4\x2dc4d5fa1c4fca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcrl7l.mount: Deactivated successfully. Dec 13 15:45:56.280561 systemd[1]: var-lib-kubelet-pods-6121434d\x2d09ad\x2d4586\x2db7c4\x2dc4d5fa1c4fca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 15:45:56.280686 systemd[1]: var-lib-kubelet-pods-6121434d\x2d09ad\x2d4586\x2db7c4\x2dc4d5fa1c4fca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 15:45:56.667066 kubelet[1447]: E1213 15:45:56.666995 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:56.773842 kubelet[1447]: E1213 15:45:56.773748 1447 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 15:45:57.668429 kubelet[1447]: E1213 15:45:57.668331 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:57.821455 kubelet[1447]: I1213 15:45:57.821399 1447 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6121434d-09ad-4586-b7c4-c4d5fa1c4fca" path="/var/lib/kubelet/pods/6121434d-09ad-4586-b7c4-c4d5fa1c4fca/volumes" Dec 13 15:45:58.668885 kubelet[1447]: E1213 15:45:58.668807 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:45:59.669662 kubelet[1447]: E1213 15:45:59.669574 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:00.089710 kubelet[1447]: E1213 15:46:00.089656 1447 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6121434d-09ad-4586-b7c4-c4d5fa1c4fca" containerName="mount-bpf-fs" Dec 13 15:46:00.090042 kubelet[1447]: E1213 15:46:00.090014 1447 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6121434d-09ad-4586-b7c4-c4d5fa1c4fca" containerName="clean-cilium-state" Dec 13 15:46:00.090188 kubelet[1447]: E1213 15:46:00.090163 1447 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6121434d-09ad-4586-b7c4-c4d5fa1c4fca" containerName="cilium-agent" Dec 13 15:46:00.090320 kubelet[1447]: E1213 15:46:00.090296 1447 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6121434d-09ad-4586-b7c4-c4d5fa1c4fca" 
containerName="mount-cgroup" Dec 13 15:46:00.090490 kubelet[1447]: E1213 15:46:00.090465 1447 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6121434d-09ad-4586-b7c4-c4d5fa1c4fca" containerName="apply-sysctl-overwrites" Dec 13 15:46:00.090688 kubelet[1447]: I1213 15:46:00.090657 1447 memory_manager.go:354] "RemoveStaleState removing state" podUID="6121434d-09ad-4586-b7c4-c4d5fa1c4fca" containerName="cilium-agent" Dec 13 15:46:00.099001 systemd[1]: Created slice kubepods-burstable-pod9b30ca00_cfdb_4b4b_9d28_8c10899d57cf.slice. Dec 13 15:46:00.110288 systemd[1]: Created slice kubepods-besteffort-poda5e2d72d_df6b_4e89_969c_bd26f49ca09c.slice. Dec 13 15:46:00.216164 kubelet[1447]: I1213 15:46:00.216093 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-net\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.216506 kubelet[1447]: I1213 15:46:00.216473 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55lzv\" (UniqueName: \"kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-kube-api-access-55lzv\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.216805 kubelet[1447]: I1213 15:46:00.216775 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5e2d72d-df6b-4e89-969c-bd26f49ca09c-cilium-config-path\") pod \"cilium-operator-5d85765b45-vrrkw\" (UID: \"a5e2d72d-df6b-4e89-969c-bd26f49ca09c\") " pod="kube-system/cilium-operator-5d85765b45-vrrkw" Dec 13 15:46:00.216978 kubelet[1447]: I1213 15:46:00.216946 1447 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-lib-modules\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.217127 kubelet[1447]: I1213 15:46:00.217099 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-clustermesh-secrets\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.217291 kubelet[1447]: I1213 15:46:00.217263 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cni-path\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.217474 kubelet[1447]: I1213 15:46:00.217445 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-kernel\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.217657 kubelet[1447]: I1213 15:46:00.217617 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hubble-tls\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.217826 kubelet[1447]: I1213 15:46:00.217794 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czr6\" (UniqueName: 
\"kubernetes.io/projected/a5e2d72d-df6b-4e89-969c-bd26f49ca09c-kube-api-access-8czr6\") pod \"cilium-operator-5d85765b45-vrrkw\" (UID: \"a5e2d72d-df6b-4e89-969c-bd26f49ca09c\") " pod="kube-system/cilium-operator-5d85765b45-vrrkw" Dec 13 15:46:00.217990 kubelet[1447]: I1213 15:46:00.217961 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-bpf-maps\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.218163 kubelet[1447]: I1213 15:46:00.218127 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-cgroup\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.218333 kubelet[1447]: I1213 15:46:00.218303 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-config-path\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.218514 kubelet[1447]: I1213 15:46:00.218484 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-run\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.218685 kubelet[1447]: I1213 15:46:00.218657 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hostproc\") pod \"cilium-4dgtw\" 
(UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.218832 kubelet[1447]: I1213 15:46:00.218804 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-etc-cni-netd\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.218985 kubelet[1447]: I1213 15:46:00.218956 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-xtables-lock\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.219122 kubelet[1447]: I1213 15:46:00.219095 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-ipsec-secrets\") pod \"cilium-4dgtw\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") " pod="kube-system/cilium-4dgtw" Dec 13 15:46:00.408998 env[1186]: time="2024-12-13T15:46:00.408821184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dgtw,Uid:9b30ca00-cfdb-4b4b-9d28-8c10899d57cf,Namespace:kube-system,Attempt:0,}" Dec 13 15:46:00.416230 env[1186]: time="2024-12-13T15:46:00.414222284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vrrkw,Uid:a5e2d72d-df6b-4e89-969c-bd26f49ca09c,Namespace:kube-system,Attempt:0,}" Dec 13 15:46:00.440152 env[1186]: time="2024-12-13T15:46:00.439855244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:46:00.440152 env[1186]: time="2024-12-13T15:46:00.439923847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:46:00.440152 env[1186]: time="2024-12-13T15:46:00.439942827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:46:00.440599 env[1186]: time="2024-12-13T15:46:00.440226778Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326 pid=3023 runtime=io.containerd.runc.v2 Dec 13 15:46:00.441116 env[1186]: time="2024-12-13T15:46:00.441012363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:46:00.441304 env[1186]: time="2024-12-13T15:46:00.441250381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:46:00.441518 env[1186]: time="2024-12-13T15:46:00.441448329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:46:00.441962 env[1186]: time="2024-12-13T15:46:00.441915233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5436eeee5cc0a7d1eb121b66d338d9c20e6387f54ed8bf9be10c991c42f49fe0 pid=3025 runtime=io.containerd.runc.v2 Dec 13 15:46:00.460804 systemd[1]: Started cri-containerd-e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326.scope. Dec 13 15:46:00.478460 systemd[1]: Started cri-containerd-5436eeee5cc0a7d1eb121b66d338d9c20e6387f54ed8bf9be10c991c42f49fe0.scope. 
Dec 13 15:46:00.533052 env[1186]: time="2024-12-13T15:46:00.532981028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dgtw,Uid:9b30ca00-cfdb-4b4b-9d28-8c10899d57cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\"" Dec 13 15:46:00.538201 env[1186]: time="2024-12-13T15:46:00.538151650Z" level=info msg="CreateContainer within sandbox \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 15:46:00.555966 env[1186]: time="2024-12-13T15:46:00.555905856Z" level=info msg="CreateContainer within sandbox \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\"" Dec 13 15:46:00.557221 env[1186]: time="2024-12-13T15:46:00.557181371Z" level=info msg="StartContainer for \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\"" Dec 13 15:46:00.573216 env[1186]: time="2024-12-13T15:46:00.573126091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vrrkw,Uid:a5e2d72d-df6b-4e89-969c-bd26f49ca09c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5436eeee5cc0a7d1eb121b66d338d9c20e6387f54ed8bf9be10c991c42f49fe0\"" Dec 13 15:46:00.576124 env[1186]: time="2024-12-13T15:46:00.576086617Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 15:46:00.594784 systemd[1]: Started cri-containerd-6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962.scope. Dec 13 15:46:00.614118 systemd[1]: cri-containerd-6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962.scope: Deactivated successfully. 
Dec 13 15:46:00.638116 env[1186]: time="2024-12-13T15:46:00.638041727Z" level=info msg="shim disconnected" id=6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962 Dec 13 15:46:00.638508 env[1186]: time="2024-12-13T15:46:00.638473724Z" level=warning msg="cleaning up after shim disconnected" id=6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962 namespace=k8s.io Dec 13 15:46:00.638648 env[1186]: time="2024-12-13T15:46:00.638604596Z" level=info msg="cleaning up dead shim" Dec 13 15:46:00.649295 env[1186]: time="2024-12-13T15:46:00.649206875Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:46:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3122 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T15:46:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 15:46:00.650074 env[1186]: time="2024-12-13T15:46:00.649913049Z" level=error msg="copy shim log" error="read /proc/self/fd/59: file already closed" Dec 13 15:46:00.652452 env[1186]: time="2024-12-13T15:46:00.650411146Z" level=error msg="Failed to pipe stdout of container \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\"" error="reading from a closed fifo" Dec 13 15:46:00.652672 env[1186]: time="2024-12-13T15:46:00.652613220Z" level=error msg="Failed to pipe stderr of container \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\"" error="reading from a closed fifo" Dec 13 15:46:00.654470 env[1186]: time="2024-12-13T15:46:00.654403913Z" level=error msg="StartContainer for \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 15:46:00.655314 kubelet[1447]: E1213 15:46:00.654945 1447 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962" Dec 13 15:46:00.659333 kubelet[1447]: E1213 15:46:00.659201 1447 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 15:46:00.659333 kubelet[1447]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 15:46:00.659333 kubelet[1447]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 15:46:00.659333 kubelet[1447]: rm /hostbin/cilium-mount Dec 13 15:46:00.660253 kubelet[1447]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55lzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4dgtw_kube-system(9b30ca00-cfdb-4b4b-9d28-8c10899d57cf): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 15:46:00.660253 kubelet[1447]: > logger="UnhandledError" Dec 13 15:46:00.661725 kubelet[1447]: E1213 15:46:00.661575 1447 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4dgtw" podUID="9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" Dec 13 15:46:00.669835 kubelet[1447]: E1213 15:46:00.669765 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:01.100426 env[1186]: time="2024-12-13T15:46:01.100345630Z" level=info msg="CreateContainer within sandbox \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 15:46:01.119778 env[1186]: time="2024-12-13T15:46:01.119607714Z" level=info msg="CreateContainer within sandbox \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\"" Dec 13 15:46:01.123207 env[1186]: time="2024-12-13T15:46:01.123136681Z" level=info msg="StartContainer for \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\"" Dec 13 15:46:01.148415 systemd[1]: Started cri-containerd-a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2.scope. Dec 13 15:46:01.161936 systemd[1]: cri-containerd-a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2.scope: Deactivated successfully. 
Dec 13 15:46:01.173553 env[1186]: time="2024-12-13T15:46:01.173456020Z" level=info msg="shim disconnected" id=a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2
Dec 13 15:46:01.173553 env[1186]: time="2024-12-13T15:46:01.173546973Z" level=warning msg="cleaning up after shim disconnected" id=a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2 namespace=k8s.io
Dec 13 15:46:01.173899 env[1186]: time="2024-12-13T15:46:01.173566870Z" level=info msg="cleaning up dead shim"
Dec 13 15:46:01.191108 env[1186]: time="2024-12-13T15:46:01.190986937Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:46:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3159 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T15:46:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 15:46:01.191508 env[1186]: time="2024-12-13T15:46:01.191419190Z" level=error msg="copy shim log" error="read /proc/self/fd/59: file already closed"
Dec 13 15:46:01.191876 env[1186]: time="2024-12-13T15:46:01.191825989Z" level=error msg="Failed to pipe stderr of container \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\"" error="reading from a closed fifo"
Dec 13 15:46:01.193491 env[1186]: time="2024-12-13T15:46:01.193442093Z" level=error msg="Failed to pipe stdout of container \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\"" error="reading from a closed fifo"
Dec 13 15:46:01.195226 env[1186]: time="2024-12-13T15:46:01.195178507Z" level=error msg="StartContainer for \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 15:46:01.195808 kubelet[1447]: E1213 15:46:01.195749 1447 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2"
Dec 13 15:46:01.196008 kubelet[1447]: E1213 15:46:01.195958 1447 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 15:46:01.196008 kubelet[1447]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 15:46:01.196008 kubelet[1447]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 15:46:01.196008 kubelet[1447]: rm /hostbin/cilium-mount
Dec 13 15:46:01.196008 kubelet[1447]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55lzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4dgtw_kube-system(9b30ca00-cfdb-4b4b-9d28-8c10899d57cf): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 15:46:01.196008 kubelet[1447]: > logger="UnhandledError"
Dec 13 15:46:01.197755 kubelet[1447]: E1213 15:46:01.197701 1447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4dgtw" podUID="9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"
Dec 13 15:46:01.581149 kubelet[1447]: E1213 15:46:01.581052 1447 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:46:01.670518 kubelet[1447]: E1213 15:46:01.670439 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:46:01.775560 kubelet[1447]: E1213 15:46:01.775494 1447 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 15:46:02.102481 kubelet[1447]: I1213 15:46:02.102431 1447 scope.go:117] "RemoveContainer" containerID="6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962"
Dec 13 15:46:02.103789 env[1186]: time="2024-12-13T15:46:02.103714497Z" level=info msg="StopPodSandbox for \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\""
Dec 13 15:46:02.111603 env[1186]: time="2024-12-13T15:46:02.103803203Z" level=info msg="Container to stop \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:46:02.111603 env[1186]: time="2024-12-13T15:46:02.103832407Z" level=info msg="Container to stop \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:46:02.106632 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326-shm.mount: Deactivated successfully.
Dec 13 15:46:02.115825 env[1186]: time="2024-12-13T15:46:02.115756279Z" level=info msg="RemoveContainer for \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\""
Dec 13 15:46:02.120694 systemd[1]: cri-containerd-e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326.scope: Deactivated successfully.
Dec 13 15:46:02.135160 env[1186]: time="2024-12-13T15:46:02.135054230Z" level=info msg="RemoveContainer for \"6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962\" returns successfully"
Dec 13 15:46:02.161488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326-rootfs.mount: Deactivated successfully.
Dec 13 15:46:02.173311 env[1186]: time="2024-12-13T15:46:02.173232741Z" level=info msg="shim disconnected" id=e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326
Dec 13 15:46:02.173311 env[1186]: time="2024-12-13T15:46:02.173304769Z" level=warning msg="cleaning up after shim disconnected" id=e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326 namespace=k8s.io
Dec 13 15:46:02.173311 env[1186]: time="2024-12-13T15:46:02.173322749Z" level=info msg="cleaning up dead shim"
Dec 13 15:46:02.186984 env[1186]: time="2024-12-13T15:46:02.186909066Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3191 runtime=io.containerd.runc.v2\n"
Dec 13 15:46:02.187766 env[1186]: time="2024-12-13T15:46:02.187719497Z" level=info msg="TearDown network for sandbox \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\" successfully"
Dec 13 15:46:02.187921 env[1186]: time="2024-12-13T15:46:02.187886241Z" level=info msg="StopPodSandbox for \"e8dd036614ece7271e170a26a36185290fc41323915d13897b20893f8a1ac326\" returns successfully"
Dec 13 15:46:02.339313 kubelet[1447]: I1213 15:46:02.339239 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-bpf-maps\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339333 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-config-path\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339385 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hostproc\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339414 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-etc-cni-netd\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339443 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-ipsec-secrets\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339466 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-cgroup\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339488 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-xtables-lock\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339512 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-net\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339540 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55lzv\" (UniqueName: \"kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-kube-api-access-55lzv\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339564 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-kernel\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339587 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cni-path\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339610 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-run\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.339628 kubelet[1447]: I1213 15:46:02.339632 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-lib-modules\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.340451 kubelet[1447]: I1213 15:46:02.339659 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-clustermesh-secrets\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.340451 kubelet[1447]: I1213 15:46:02.339701 1447 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hubble-tls\") pod \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\" (UID: \"9b30ca00-cfdb-4b4b-9d28-8c10899d57cf\") "
Dec 13 15:46:02.341039 kubelet[1447]: I1213 15:46:02.341002 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.341121 kubelet[1447]: I1213 15:46:02.341060 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.353988 kubelet[1447]: I1213 15:46:02.348145 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 15:46:02.353988 kubelet[1447]: I1213 15:46:02.348227 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hostproc" (OuterVolumeSpecName: "hostproc") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.353988 kubelet[1447]: I1213 15:46:02.348258 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.351538 systemd[1]: var-lib-kubelet-pods-9b30ca00\x2dcfdb\x2d4b4b\x2d9d28\x2d8c10899d57cf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 15:46:02.355703 systemd[1]: var-lib-kubelet-pods-9b30ca00\x2dcfdb\x2d4b4b\x2d9d28\x2d8c10899d57cf-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 15:46:02.356990 kubelet[1447]: I1213 15:46:02.356943 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.357475 kubelet[1447]: I1213 15:46:02.357442 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 15:46:02.359818 kubelet[1447]: I1213 15:46:02.357615 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.359818 kubelet[1447]: I1213 15:46:02.357705 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.359818 kubelet[1447]: I1213 15:46:02.357737 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.359818 kubelet[1447]: I1213 15:46:02.357772 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cni-path" (OuterVolumeSpecName: "cni-path") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.359818 kubelet[1447]: I1213 15:46:02.357801 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:46:02.360329 kubelet[1447]: I1213 15:46:02.360294 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 15:46:02.365115 systemd[1]: var-lib-kubelet-pods-9b30ca00\x2dcfdb\x2d4b4b\x2d9d28\x2d8c10899d57cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55lzv.mount: Deactivated successfully.
Dec 13 15:46:02.366894 kubelet[1447]: I1213 15:46:02.366845 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-kube-api-access-55lzv" (OuterVolumeSpecName: "kube-api-access-55lzv") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "kube-api-access-55lzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 15:46:02.369373 systemd[1]: var-lib-kubelet-pods-9b30ca00\x2dcfdb\x2d4b4b\x2d9d28\x2d8c10899d57cf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 15:46:02.370777 kubelet[1447]: I1213 15:46:02.370666 1447 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" (UID: "9b30ca00-cfdb-4b4b-9d28-8c10899d57cf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 15:46:02.440006 kubelet[1447]: I1213 15:46:02.439946 1447 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cni-path\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.440305 kubelet[1447]: I1213 15:46:02.440279 1447 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hubble-tls\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.440477 kubelet[1447]: I1213 15:46:02.440451 1447 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-run\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.440610 kubelet[1447]: I1213 15:46:02.440585 1447 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-lib-modules\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.440767 kubelet[1447]: I1213 15:46:02.440735 1447 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-clustermesh-secrets\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.440982 kubelet[1447]: I1213 15:46:02.440957 1447 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-etc-cni-netd\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.441259 kubelet[1447]: I1213 15:46:02.441235 1447 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-ipsec-secrets\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.441485 kubelet[1447]: I1213 15:46:02.441460 1447 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-bpf-maps\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.441725 kubelet[1447]: I1213 15:46:02.441644 1447 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-config-path\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.441846 kubelet[1447]: I1213 15:46:02.441822 1447 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-hostproc\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.442017 kubelet[1447]: I1213 15:46:02.441990 1447 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-55lzv\" (UniqueName: \"kubernetes.io/projected/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-kube-api-access-55lzv\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.442145 kubelet[1447]: I1213 15:46:02.442121 1447 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-kernel\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.442277 kubelet[1447]: I1213 15:46:02.442253 1447 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-cilium-cgroup\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.442430 kubelet[1447]: I1213 15:46:02.442407 1447 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-xtables-lock\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.442570 kubelet[1447]: I1213 15:46:02.442543 1447 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf-host-proc-sys-net\") on node \"10.244.25.74\" DevicePath \"\""
Dec 13 15:46:02.671055 kubelet[1447]: E1213 15:46:02.670804 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:46:03.106664 kubelet[1447]: I1213 15:46:03.106596 1447 scope.go:117] "RemoveContainer" containerID="a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2"
Dec 13 15:46:03.108298 env[1186]: time="2024-12-13T15:46:03.108230238Z" level=info msg="RemoveContainer for \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\""
Dec 13 15:46:03.117388 env[1186]: time="2024-12-13T15:46:03.114024175Z" level=info msg="RemoveContainer for \"a4bfd1f08cbda844059627059a68c10ea5815dd0c7693587f0d2bde2315b67b2\" returns successfully"
Dec 13 15:46:03.121443 systemd[1]: Removed slice kubepods-burstable-pod9b30ca00_cfdb_4b4b_9d28_8c10899d57cf.slice.
Dec 13 15:46:03.151908 kubelet[1447]: I1213 15:46:03.151842 1447 setters.go:600] "Node became not ready" node="10.244.25.74" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T15:46:03Z","lastTransitionTime":"2024-12-13T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 15:46:03.183142 kubelet[1447]: E1213 15:46:03.183079 1447 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" containerName="mount-cgroup"
Dec 13 15:46:03.183522 kubelet[1447]: I1213 15:46:03.183486 1447 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" containerName="mount-cgroup"
Dec 13 15:46:03.183711 kubelet[1447]: E1213 15:46:03.183665 1447 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" containerName="mount-cgroup"
Dec 13 15:46:03.183877 kubelet[1447]: I1213 15:46:03.183852 1447 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" containerName="mount-cgroup"
Dec 13 15:46:03.192256 systemd[1]: Created slice kubepods-burstable-pod069f1dc0_edea_4cb4_96d0_ecacd3a4548c.slice.
Dec 13 15:46:03.248845 kubelet[1447]: I1213 15:46:03.248771 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-cilium-run\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.249221 kubelet[1447]: I1213 15:46:03.249182 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-hostproc\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.249437 kubelet[1447]: I1213 15:46:03.249409 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-host-proc-sys-kernel\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.249640 kubelet[1447]: I1213 15:46:03.249605 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-clustermesh-secrets\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.249828 kubelet[1447]: I1213 15:46:03.249791 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-cilium-config-path\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250052 kubelet[1447]: I1213 15:46:03.250006 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-cilium-ipsec-secrets\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250151 kubelet[1447]: I1213 15:46:03.250056 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42kkh\" (UniqueName: \"kubernetes.io/projected/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-kube-api-access-42kkh\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250151 kubelet[1447]: I1213 15:46:03.250085 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-bpf-maps\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250151 kubelet[1447]: I1213 15:46:03.250115 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-cilium-cgroup\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250420 kubelet[1447]: I1213 15:46:03.250149 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-etc-cni-netd\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250420 kubelet[1447]: I1213 15:46:03.250178 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-lib-modules\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250420 kubelet[1447]: I1213 15:46:03.250205 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-cni-path\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250420 kubelet[1447]: I1213 15:46:03.250233 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-xtables-lock\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250420 kubelet[1447]: I1213 15:46:03.250258 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-host-proc-sys-net\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.250420 kubelet[1447]: I1213 15:46:03.250282 1447 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/069f1dc0-edea-4cb4-96d0-ecacd3a4548c-hubble-tls\") pod \"cilium-hjt25\" (UID: \"069f1dc0-edea-4cb4-96d0-ecacd3a4548c\") " pod="kube-system/cilium-hjt25"
Dec 13 15:46:03.501560 env[1186]: time="2024-12-13T15:46:03.501378295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjt25,Uid:069f1dc0-edea-4cb4-96d0-ecacd3a4548c,Namespace:kube-system,Attempt:0,}"
Dec 13 15:46:03.522265 env[1186]: time="2024-12-13T15:46:03.522162861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:46:03.522602 env[1186]: time="2024-12-13T15:46:03.522224574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:46:03.522602 env[1186]: time="2024-12-13T15:46:03.522242820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:46:03.522602 env[1186]: time="2024-12-13T15:46:03.522436179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635 pid=3220 runtime=io.containerd.runc.v2
Dec 13 15:46:03.539481 systemd[1]: Started cri-containerd-2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635.scope.
Dec 13 15:46:03.587253 env[1186]: time="2024-12-13T15:46:03.587183550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjt25,Uid:069f1dc0-edea-4cb4-96d0-ecacd3a4548c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\""
Dec 13 15:46:03.591707 env[1186]: time="2024-12-13T15:46:03.591644525Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 15:46:03.607389 env[1186]: time="2024-12-13T15:46:03.607268499Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df\""
Dec 13 15:46:03.608738 env[1186]: time="2024-12-13T15:46:03.608683624Z" level=info msg="StartContainer for \"2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df\""
Dec 13 15:46:03.633129 systemd[1]: Started cri-containerd-2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df.scope.
Dec 13 15:46:03.671988 kubelet[1447]: E1213 15:46:03.671865 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:46:03.683862 env[1186]: time="2024-12-13T15:46:03.683770326Z" level=info msg="StartContainer for \"2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df\" returns successfully"
Dec 13 15:46:03.704045 systemd[1]: cri-containerd-2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df.scope: Deactivated successfully.
Dec 13 15:46:03.743856 env[1186]: time="2024-12-13T15:46:03.743731567Z" level=info msg="shim disconnected" id=2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df
Dec 13 15:46:03.743856 env[1186]: time="2024-12-13T15:46:03.743801851Z" level=warning msg="cleaning up after shim disconnected" id=2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df namespace=k8s.io
Dec 13 15:46:03.743856 env[1186]: time="2024-12-13T15:46:03.743819927Z" level=info msg="cleaning up dead shim"
Dec 13 15:46:03.747945 kubelet[1447]: W1213 15:46:03.745726 1447 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b30ca00_cfdb_4b4b_9d28_8c10899d57cf.slice/cri-containerd-6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962.scope WatchSource:0}: container "6f6e71a466e471a83e531c9aca0017bae5dac65551cfec89767723b4ab0cb962" in namespace "k8s.io": not found
Dec 13 15:46:03.768591 env[1186]: time="2024-12-13T15:46:03.768430023Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:46:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3302 runtime=io.containerd.runc.v2\n"
Dec 13 15:46:03.820916 kubelet[1447]: I1213 15:46:03.820866 1447 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b30ca00-cfdb-4b4b-9d28-8c10899d57cf" path="/var/lib/kubelet/pods/9b30ca00-cfdb-4b4b-9d28-8c10899d57cf/volumes"
Dec 13 15:46:04.116449 env[1186]: time="2024-12-13T15:46:04.116272417Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 15:46:04.133679 env[1186]: time="2024-12-13T15:46:04.133587317Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc\""
Dec 13 15:46:04.134664 env[1186]: time="2024-12-13T15:46:04.134628484Z" level=info msg="StartContainer for \"c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc\""
Dec 13 15:46:04.160768 systemd[1]: Started cri-containerd-c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc.scope.
Dec 13 15:46:04.213069 env[1186]: time="2024-12-13T15:46:04.212984013Z" level=info msg="StartContainer for \"c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc\" returns successfully"
Dec 13 15:46:04.223885 systemd[1]: cri-containerd-c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc.scope: Deactivated successfully.
Dec 13 15:46:04.255617 env[1186]: time="2024-12-13T15:46:04.255542447Z" level=info msg="shim disconnected" id=c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc Dec 13 15:46:04.256070 env[1186]: time="2024-12-13T15:46:04.256038390Z" level=warning msg="cleaning up after shim disconnected" id=c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc namespace=k8s.io Dec 13 15:46:04.256229 env[1186]: time="2024-12-13T15:46:04.256199988Z" level=info msg="cleaning up dead shim" Dec 13 15:46:04.269957 env[1186]: time="2024-12-13T15:46:04.269905780Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3368 runtime=io.containerd.runc.v2\n" Dec 13 15:46:04.672891 kubelet[1447]: E1213 15:46:04.672789 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:05.119213 env[1186]: time="2024-12-13T15:46:05.119149410Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 15:46:05.137375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036499936.mount: Deactivated successfully. Dec 13 15:46:05.146538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540042648.mount: Deactivated successfully. 
Dec 13 15:46:05.153437 env[1186]: time="2024-12-13T15:46:05.153297298Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa\"" Dec 13 15:46:05.154582 env[1186]: time="2024-12-13T15:46:05.154546470Z" level=info msg="StartContainer for \"1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa\"" Dec 13 15:46:05.183623 systemd[1]: Started cri-containerd-1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa.scope. Dec 13 15:46:05.233908 env[1186]: time="2024-12-13T15:46:05.233832807Z" level=info msg="StartContainer for \"1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa\" returns successfully" Dec 13 15:46:05.237274 systemd[1]: cri-containerd-1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa.scope: Deactivated successfully. Dec 13 15:46:05.268409 env[1186]: time="2024-12-13T15:46:05.268247195Z" level=info msg="shim disconnected" id=1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa Dec 13 15:46:05.268409 env[1186]: time="2024-12-13T15:46:05.268406236Z" level=warning msg="cleaning up after shim disconnected" id=1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa namespace=k8s.io Dec 13 15:46:05.268751 env[1186]: time="2024-12-13T15:46:05.268428712Z" level=info msg="cleaning up dead shim" Dec 13 15:46:05.278440 env[1186]: time="2024-12-13T15:46:05.278340503Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:46:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3424 runtime=io.containerd.runc.v2\n" Dec 13 15:46:05.674059 kubelet[1447]: E1213 15:46:05.673987 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:06.125684 env[1186]: time="2024-12-13T15:46:06.125497593Z" level=info 
msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 15:46:06.155781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454953945.mount: Deactivated successfully. Dec 13 15:46:06.164465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241604468.mount: Deactivated successfully. Dec 13 15:46:06.180526 env[1186]: time="2024-12-13T15:46:06.180414197Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6\"" Dec 13 15:46:06.181589 env[1186]: time="2024-12-13T15:46:06.181550019Z" level=info msg="StartContainer for \"69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6\"" Dec 13 15:46:06.210793 systemd[1]: Started cri-containerd-69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6.scope. Dec 13 15:46:06.252627 systemd[1]: cri-containerd-69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6.scope: Deactivated successfully. 
Dec 13 15:46:06.256935 env[1186]: time="2024-12-13T15:46:06.256542179Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069f1dc0_edea_4cb4_96d0_ecacd3a4548c.slice/cri-containerd-69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6.scope/memory.events\": no such file or directory" Dec 13 15:46:06.259088 env[1186]: time="2024-12-13T15:46:06.259032181Z" level=info msg="StartContainer for \"69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6\" returns successfully" Dec 13 15:46:06.288656 env[1186]: time="2024-12-13T15:46:06.285892271Z" level=info msg="shim disconnected" id=69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6 Dec 13 15:46:06.288656 env[1186]: time="2024-12-13T15:46:06.285975848Z" level=warning msg="cleaning up after shim disconnected" id=69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6 namespace=k8s.io Dec 13 15:46:06.288656 env[1186]: time="2024-12-13T15:46:06.285995115Z" level=info msg="cleaning up dead shim" Dec 13 15:46:06.299807 env[1186]: time="2024-12-13T15:46:06.299689851Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:46:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3477 runtime=io.containerd.runc.v2\n" Dec 13 15:46:06.676317 kubelet[1447]: E1213 15:46:06.676240 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:06.777905 kubelet[1447]: E1213 15:46:06.777840 1447 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 15:46:06.870122 kubelet[1447]: W1213 15:46:06.869859 1447 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069f1dc0_edea_4cb4_96d0_ecacd3a4548c.slice/cri-containerd-2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df.scope WatchSource:0}: task 2e9d8db0e31b83f8407e6be23cf26e3cf7c1d1920963ffa813df7dddd7cf24df not found: not found Dec 13 15:46:07.131685 env[1186]: time="2024-12-13T15:46:07.131603723Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 15:46:07.154418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651240874.mount: Deactivated successfully. Dec 13 15:46:07.164316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895676231.mount: Deactivated successfully. Dec 13 15:46:07.170182 env[1186]: time="2024-12-13T15:46:07.170110882Z" level=info msg="CreateContainer within sandbox \"2ef400d60dc8e957e2de267d57d85828a0aebebc9e56077da8600daedf1dd635\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"414ff3722f64aac566c3b131017c65858df3255edbbfec819486e0b5cda95652\"" Dec 13 15:46:07.171323 env[1186]: time="2024-12-13T15:46:07.171268068Z" level=info msg="StartContainer for \"414ff3722f64aac566c3b131017c65858df3255edbbfec819486e0b5cda95652\"" Dec 13 15:46:07.204003 systemd[1]: Started cri-containerd-414ff3722f64aac566c3b131017c65858df3255edbbfec819486e0b5cda95652.scope. 
Dec 13 15:46:07.252412 env[1186]: time="2024-12-13T15:46:07.251970565Z" level=info msg="StartContainer for \"414ff3722f64aac566c3b131017c65858df3255edbbfec819486e0b5cda95652\" returns successfully" Dec 13 15:46:07.677390 kubelet[1447]: E1213 15:46:07.677228 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:08.093507 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 15:46:08.168220 kubelet[1447]: I1213 15:46:08.168104 1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hjt25" podStartSLOduration=5.168058367 podStartE2EDuration="5.168058367s" podCreationTimestamp="2024-12-13 15:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:46:08.166849064 +0000 UTC m=+87.347773816" watchObservedRunningTime="2024-12-13 15:46:08.168058367 +0000 UTC m=+87.348983114" Dec 13 15:46:08.485812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222306255.mount: Deactivated successfully. 
Dec 13 15:46:08.677698 kubelet[1447]: E1213 15:46:08.677613 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:09.655002 env[1186]: time="2024-12-13T15:46:09.654887500Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:46:09.662685 env[1186]: time="2024-12-13T15:46:09.662597662Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 15:46:09.663011 env[1186]: time="2024-12-13T15:46:09.662942943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:46:09.664052 env[1186]: time="2024-12-13T15:46:09.664011324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:46:09.669455 env[1186]: time="2024-12-13T15:46:09.669400223Z" level=info msg="CreateContainer within sandbox \"5436eeee5cc0a7d1eb121b66d338d9c20e6387f54ed8bf9be10c991c42f49fe0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 15:46:09.678909 kubelet[1447]: E1213 15:46:09.678761 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:09.692600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742966166.mount: Deactivated successfully. 
Dec 13 15:46:09.702524 env[1186]: time="2024-12-13T15:46:09.702209622Z" level=info msg="CreateContainer within sandbox \"5436eeee5cc0a7d1eb121b66d338d9c20e6387f54ed8bf9be10c991c42f49fe0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"85d9a23d54aee091c11f9c039afdee4c552296c36d2cb437db8fe3ebbf1c06fa\"" Dec 13 15:46:09.703644 env[1186]: time="2024-12-13T15:46:09.703396383Z" level=info msg="StartContainer for \"85d9a23d54aee091c11f9c039afdee4c552296c36d2cb437db8fe3ebbf1c06fa\"" Dec 13 15:46:09.750914 systemd[1]: Started cri-containerd-85d9a23d54aee091c11f9c039afdee4c552296c36d2cb437db8fe3ebbf1c06fa.scope. Dec 13 15:46:09.825858 env[1186]: time="2024-12-13T15:46:09.825763680Z" level=info msg="StartContainer for \"85d9a23d54aee091c11f9c039afdee4c552296c36d2cb437db8fe3ebbf1c06fa\" returns successfully" Dec 13 15:46:09.985952 kubelet[1447]: W1213 15:46:09.984873 1447 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069f1dc0_edea_4cb4_96d0_ecacd3a4548c.slice/cri-containerd-c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc.scope WatchSource:0}: task c2eee8a8b25f7fbba22b72b796c9defd7dd03800b780fe1acd14013ef00772cc not found: not found Dec 13 15:46:10.679128 kubelet[1447]: E1213 15:46:10.679023 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:10.688438 systemd[1]: run-containerd-runc-k8s.io-85d9a23d54aee091c11f9c039afdee4c552296c36d2cb437db8fe3ebbf1c06fa-runc.R0Bvn9.mount: Deactivated successfully. 
Dec 13 15:46:11.680628 kubelet[1447]: E1213 15:46:11.680500 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:11.812876 systemd-networkd[1022]: lxc_health: Link UP Dec 13 15:46:11.835438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 15:46:11.835196 systemd-networkd[1022]: lxc_health: Gained carrier Dec 13 15:46:12.271322 systemd[1]: run-containerd-runc-k8s.io-414ff3722f64aac566c3b131017c65858df3255edbbfec819486e0b5cda95652-runc.AWpbzq.mount: Deactivated successfully. Dec 13 15:46:12.681950 kubelet[1447]: E1213 15:46:12.681745 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:13.098603 kubelet[1447]: W1213 15:46:13.098447 1447 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069f1dc0_edea_4cb4_96d0_ecacd3a4548c.slice/cri-containerd-1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa.scope WatchSource:0}: task 1d50e31df9294d26c856d1b59f9b3111744c6e7f72ff5846db99837a6e6331fa not found: not found Dec 13 15:46:13.531931 kubelet[1447]: I1213 15:46:13.531751 1447 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vrrkw" podStartSLOduration=4.441630817 podStartE2EDuration="13.531664237s" podCreationTimestamp="2024-12-13 15:46:00 +0000 UTC" firstStartedPulling="2024-12-13 15:46:00.575580882 +0000 UTC m=+79.756505626" lastFinishedPulling="2024-12-13 15:46:09.665614302 +0000 UTC m=+88.846539046" observedRunningTime="2024-12-13 15:46:10.152615797 +0000 UTC m=+89.333540549" watchObservedRunningTime="2024-12-13 15:46:13.531664237 +0000 UTC m=+92.712588982" Dec 13 15:46:13.614962 systemd-networkd[1022]: lxc_health: Gained IPv6LL Dec 13 15:46:13.682069 kubelet[1447]: E1213 15:46:13.682007 1447 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:14.578638 systemd[1]: run-containerd-runc-k8s.io-414ff3722f64aac566c3b131017c65858df3255edbbfec819486e0b5cda95652-runc.L3Lzo6.mount: Deactivated successfully. Dec 13 15:46:14.683137 kubelet[1447]: E1213 15:46:14.683017 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:15.683680 kubelet[1447]: E1213 15:46:15.683510 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:16.210246 kubelet[1447]: W1213 15:46:16.210171 1447 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069f1dc0_edea_4cb4_96d0_ecacd3a4548c.slice/cri-containerd-69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6.scope WatchSource:0}: task 69417713d4028702f50daec03860f87102cf33f63c68952fd7c3f35d0a2a8ae6 not found: not found Dec 13 15:46:16.685633 kubelet[1447]: E1213 15:46:16.685517 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:16.851231 systemd[1]: run-containerd-runc-k8s.io-414ff3722f64aac566c3b131017c65858df3255edbbfec819486e0b5cda95652-runc.iAB0uQ.mount: Deactivated successfully. 
Dec 13 15:46:17.687403 kubelet[1447]: E1213 15:46:17.687322 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:18.688570 kubelet[1447]: E1213 15:46:18.688464 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:19.689068 kubelet[1447]: E1213 15:46:19.688981 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:20.690203 kubelet[1447]: E1213 15:46:20.690121 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:21.581041 kubelet[1447]: E1213 15:46:21.580853 1447 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:46:21.691114 kubelet[1447]: E1213 15:46:21.691046 1447 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"