Dec 13 06:55:18.982445 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 06:55:18.982490 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:55:18.982516 kernel: BIOS-provided physical RAM map:
Dec 13 06:55:18.982526 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 06:55:18.982536 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 06:55:18.982545 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 06:55:18.982557 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 06:55:18.982567 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 06:55:18.982577 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 06:55:18.982587 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 06:55:18.982601 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 06:55:18.982611 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 06:55:18.982621 kernel: NX (Execute Disable) protection: active
Dec 13 06:55:18.982631 kernel: SMBIOS 2.8 present.
Dec 13 06:55:18.982652 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 06:55:18.982663 kernel: Hypervisor detected: KVM
Dec 13 06:55:18.982678 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 06:55:18.982689 kernel: kvm-clock: cpu 0, msr 2819b001, primary cpu clock
Dec 13 06:55:18.982700 kernel: kvm-clock: using sched offset of 4850920597 cycles
Dec 13 06:55:18.982711 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 06:55:18.982722 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 06:55:18.982740 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 06:55:18.982751 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 06:55:18.984072 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 06:55:18.984086 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 06:55:18.984104 kernel: Using GB pages for direct mapping
Dec 13 06:55:18.984115 kernel: ACPI: Early table checksum verification disabled
Dec 13 06:55:18.984126 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 06:55:18.984146 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:55:18.984157 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:55:18.984168 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:55:18.984179 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 06:55:18.984189 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:55:18.984200 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:55:18.984216 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:55:18.984227 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:55:18.984237 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 06:55:18.984248 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 06:55:18.984259 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 06:55:18.984270 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 06:55:18.984287 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 06:55:18.984303 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 06:55:18.984314 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 06:55:18.984326 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 06:55:18.984337 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 06:55:18.984349 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 06:55:18.984360 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 06:55:18.984381 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 06:55:18.984397 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 06:55:18.984408 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 06:55:18.984420 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 06:55:18.984431 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 06:55:18.984444 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 06:55:18.984455 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 06:55:18.984466 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 06:55:18.984478 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 06:55:18.984489 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 06:55:18.984501 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 06:55:18.984516 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 06:55:18.984528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 06:55:18.984539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 06:55:18.984551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 06:55:18.984562 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 06:55:18.984574 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 06:55:18.984586 kernel: Zone ranges:
Dec 13 06:55:18.984598 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 06:55:18.984609 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 06:55:18.984625 kernel: Normal empty
Dec 13 06:55:18.984637 kernel: Movable zone start for each node
Dec 13 06:55:18.984648 kernel: Early memory node ranges
Dec 13 06:55:18.984660 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 06:55:18.984671 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 06:55:18.984682 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 06:55:18.984694 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 06:55:18.984705 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 06:55:18.984717 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 06:55:18.984732 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 06:55:18.984744 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 06:55:18.984755 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 06:55:18.985872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 06:55:18.985888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 06:55:18.985900 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 06:55:18.985912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 06:55:18.985923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 06:55:18.985935 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 06:55:18.985953 kernel: TSC deadline timer available
Dec 13 06:55:18.985965 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 06:55:18.985977 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 06:55:18.985988 kernel: Booting paravirtualized kernel on KVM
Dec 13 06:55:18.986000 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 06:55:18.986012 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 06:55:18.986024 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 06:55:18.986036 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 06:55:18.986047 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 06:55:18.986063 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Dec 13 06:55:18.986075 kernel: kvm-guest: PV spinlocks enabled
Dec 13 06:55:18.986087 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 06:55:18.986098 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 06:55:18.986110 kernel: Policy zone: DMA32
Dec 13 06:55:18.986123 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:55:18.986135 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 06:55:18.986147 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 06:55:18.986163 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 06:55:18.986175 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 06:55:18.986187 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 192524K reserved, 0K cma-reserved)
Dec 13 06:55:18.986199 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 06:55:18.986210 kernel: Kernel/User page tables isolation: enabled
Dec 13 06:55:18.986222 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 06:55:18.986233 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 06:55:18.986245 kernel: rcu: Hierarchical RCU implementation.
Dec 13 06:55:18.986257 kernel: rcu: RCU event tracing is enabled.
Dec 13 06:55:18.986273 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 06:55:18.986285 kernel: Rude variant of Tasks RCU enabled.
Dec 13 06:55:18.986297 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 06:55:18.986308 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 06:55:18.986320 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 06:55:18.986331 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 06:55:18.986343 kernel: random: crng init done
Dec 13 06:55:18.986368 kernel: Console: colour VGA+ 80x25
Dec 13 06:55:18.986380 kernel: printk: console [tty0] enabled
Dec 13 06:55:18.986393 kernel: printk: console [ttyS0] enabled
Dec 13 06:55:18.986405 kernel: ACPI: Core revision 20210730
Dec 13 06:55:18.986426 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 06:55:18.986442 kernel: x2apic enabled
Dec 13 06:55:18.986454 kernel: Switched APIC routing to physical x2apic.
Dec 13 06:55:18.986466 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 06:55:18.986487 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 06:55:18.986499 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 06:55:18.986516 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 06:55:18.986528 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 06:55:18.986540 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 06:55:18.986560 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 06:55:18.986573 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 06:55:18.986585 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 06:55:18.986597 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 06:55:18.986609 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 06:55:18.986621 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 06:55:18.986633 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 06:55:18.986645 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 06:55:18.986661 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 06:55:18.986673 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 06:55:18.986685 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 06:55:18.986697 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 06:55:18.986709 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 06:55:18.986735 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 06:55:18.986748 kernel: Freeing SMP alternatives memory: 32K
Dec 13 06:55:18.986770 kernel: pid_max: default: 32768 minimum: 301
Dec 13 06:55:18.986797 kernel: LSM: Security Framework initializing
Dec 13 06:55:18.986818 kernel: SELinux: Initializing.
Dec 13 06:55:18.986833 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:55:18.986851 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:55:18.986863 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 06:55:18.986876 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 06:55:18.986888 kernel: signal: max sigframe size: 1776
Dec 13 06:55:18.986900 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 06:55:18.986912 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 06:55:18.986924 kernel: smp: Bringing up secondary CPUs ...
Dec 13 06:55:18.986936 kernel: x86: Booting SMP configuration:
Dec 13 06:55:18.986948 kernel: .... node #0, CPUs: #1
Dec 13 06:55:18.986964 kernel: kvm-clock: cpu 1, msr 2819b041, secondary cpu clock
Dec 13 06:55:18.986977 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 06:55:18.986989 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Dec 13 06:55:18.987001 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 06:55:18.987013 kernel: smpboot: Max logical packages: 16
Dec 13 06:55:18.987025 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 06:55:18.987037 kernel: devtmpfs: initialized
Dec 13 06:55:18.987050 kernel: x86/mm: Memory block size: 128MB
Dec 13 06:55:18.987062 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 06:55:18.987074 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 06:55:18.987090 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 06:55:18.987102 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 06:55:18.987115 kernel: audit: initializing netlink subsys (disabled)
Dec 13 06:55:18.987127 kernel: audit: type=2000 audit(1734072918.043:1): state=initialized audit_enabled=0 res=1
Dec 13 06:55:18.987139 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 06:55:18.987154 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 06:55:18.987166 kernel: cpuidle: using governor menu
Dec 13 06:55:18.987178 kernel: ACPI: bus type PCI registered
Dec 13 06:55:18.987190 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 06:55:18.987206 kernel: dca service started, version 1.12.1
Dec 13 06:55:18.987219 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 06:55:18.987231 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 06:55:18.987243 kernel: PCI: Using configuration type 1 for base access
Dec 13 06:55:18.987255 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 06:55:18.987267 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 06:55:18.987279 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 06:55:18.987291 kernel: ACPI: Added _OSI(Module Device)
Dec 13 06:55:18.987307 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 06:55:18.987320 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 06:55:18.987332 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 06:55:18.987344 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 06:55:18.987356 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 06:55:18.987377 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 06:55:18.987389 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 06:55:18.987401 kernel: ACPI: Interpreter enabled
Dec 13 06:55:18.987413 kernel: ACPI: PM: (supports S0 S5)
Dec 13 06:55:18.987425 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 06:55:18.987450 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 06:55:18.987462 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 06:55:18.987474 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 06:55:18.987736 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 06:55:18.987938 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 06:55:18.988094 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 06:55:18.988112 kernel: PCI host bridge to bus 0000:00
Dec 13 06:55:18.988340 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 06:55:18.988491 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 06:55:18.988638 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 06:55:18.988784 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 06:55:18.988962 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 06:55:18.989112 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:55:18.989295 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 06:55:18.989481 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 06:55:18.989661 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 06:55:18.992901 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 06:55:18.993071 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 06:55:18.993228 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 06:55:18.993383 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 06:55:18.993560 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.993714 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 06:55:18.993917 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.994073 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 06:55:18.994238 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.994391 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 06:55:18.994579 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.994731 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 06:55:18.994948 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.995105 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 06:55:18.995279 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.995450 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 06:55:18.995627 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.995795 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 06:55:18.995976 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 06:55:18.996132 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 06:55:18.996306 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 06:55:18.996460 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 06:55:18.996615 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 06:55:18.996802 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 06:55:18.996972 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 06:55:18.997137 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 06:55:18.997293 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 06:55:18.997447 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 06:55:18.997599 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 06:55:19.003870 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 06:55:19.004059 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 06:55:19.004235 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 06:55:19.004392 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 06:55:19.004545 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 06:55:19.004753 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 06:55:19.004959 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 06:55:19.005139 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 06:55:19.005301 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 06:55:19.005456 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:55:19.005608 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:55:19.005772 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:55:19.005962 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 06:55:19.006149 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 06:55:19.006320 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 06:55:19.006483 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:55:19.006642 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:55:19.010897 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 06:55:19.011077 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 06:55:19.011240 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:55:19.011404 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:55:19.011564 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:55:19.011749 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 06:55:19.011945 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 06:55:19.012104 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:55:19.012257 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:55:19.012411 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:55:19.012566 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:55:19.012728 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:55:19.012907 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:55:19.013064 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:55:19.013218 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:55:19.013372 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:55:19.013524 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:55:19.013677 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:55:19.013857 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:55:19.014022 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:55:19.014177 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:55:19.014333 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:55:19.014496 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:55:19.014651 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:55:19.019905 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:55:19.019930 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 06:55:19.019944 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 06:55:19.019964 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 06:55:19.019977 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 06:55:19.019990 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 06:55:19.020002 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 06:55:19.020015 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 06:55:19.020027 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 06:55:19.020040 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 06:55:19.020052 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 06:55:19.020065 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 06:55:19.020082 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 06:55:19.020094 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 06:55:19.020107 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 06:55:19.020119 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 06:55:19.020132 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 06:55:19.020144 kernel: iommu: Default domain type: Translated
Dec 13 06:55:19.020157 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 06:55:19.020339 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 06:55:19.020496 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 06:55:19.020655 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 06:55:19.020674 kernel: vgaarb: loaded
Dec 13 06:55:19.020695 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 06:55:19.020708 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 06:55:19.020720 kernel: PTP clock support registered
Dec 13 06:55:19.020733 kernel: PCI: Using ACPI for IRQ routing
Dec 13 06:55:19.020746 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 06:55:19.020758 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 06:55:19.020777 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 06:55:19.020867 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 06:55:19.020882 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 06:55:19.020895 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 06:55:19.020907 kernel: pnp: PnP ACPI init
Dec 13 06:55:19.021111 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 06:55:19.021132 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 06:55:19.021145 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 06:55:19.021164 kernel: NET: Registered PF_INET protocol family
Dec 13 06:55:19.021177 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 06:55:19.021190 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 06:55:19.021203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 06:55:19.021216 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:55:19.021228 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 06:55:19.021241 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 06:55:19.021253 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:55:19.021266 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:55:19.021283 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 06:55:19.021295 kernel: NET: Registered PF_XDP protocol family
Dec 13 06:55:19.021447 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 06:55:19.021603 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 06:55:19.021768 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 06:55:19.021951 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 06:55:19.022106 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 06:55:19.022273 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 06:55:19.022423 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 06:55:19.022575 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 06:55:19.022727 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 06:55:19.022925 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 06:55:19.023082 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 06:55:19.023242 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 06:55:19.023395 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 06:55:19.023548 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 06:55:19.023700 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 06:55:19.023880 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 06:55:19.024045 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:55:19.024206 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:55:19.024360 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:55:19.024512 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 06:55:19.024670 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:55:19.024857 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:55:19.025038 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:55:19.025193 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 06:55:19.025365 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:55:19.025520 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:55:19.025674 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:55:19.025854 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 06:55:19.026015 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:55:19.026170 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:55:19.026324 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:55:19.026490 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 06:55:19.026647 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:55:19.030878 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:55:19.031053 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:55:19.031220 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 06:55:19.031376 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:55:19.031530 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:55:19.031684 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:55:19.038301 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 06:55:19.038481 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:55:19.038644 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:55:19.038857 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:55:19.039027 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 06:55:19.039182 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:55:19.039337 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:55:19.039495 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:55:19.039656 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 06:55:19.039858 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:55:19.040014 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:55:19.040164 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 06:55:19.040305 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 06:55:19.040444 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 06:55:19.040582 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 06:55:19.040722 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 06:55:19.040895 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:55:19.041075 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 06:55:19.041228 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 06:55:19.041385 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:55:19.041551 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 06:55:19.041719 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 06:55:19.041902 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 06:55:19.042052 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:55:19.042226 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 06:55:19.042377 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 06:55:19.042527 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:55:19.042691 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 06:55:19.042901 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 06:55:19.043053 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:55:19.043218 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 06:55:19.043374 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 06:55:19.043524 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:55:19.043709 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 06:55:19.043888 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 06:55:19.044040 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:55:19.044239 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 06:55:19.044400 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 06:55:19.044549 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:55:19.044719 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 06:55:19.044917 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 06:55:19.045081 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:55:19.045102 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 06:55:19.045117 kernel: PCI: CLS 0 bytes,
default 64 Dec 13 06:55:19.045130 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 06:55:19.045150 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 06:55:19.045164 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 06:55:19.045178 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 06:55:19.045192 kernel: Initialise system trusted keyrings Dec 13 06:55:19.045206 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 06:55:19.045219 kernel: Key type asymmetric registered Dec 13 06:55:19.045232 kernel: Asymmetric key parser 'x509' registered Dec 13 06:55:19.045245 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 06:55:19.045258 kernel: io scheduler mq-deadline registered Dec 13 06:55:19.045276 kernel: io scheduler kyber registered Dec 13 06:55:19.045289 kernel: io scheduler bfq registered Dec 13 06:55:19.045458 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 06:55:19.045644 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 06:55:19.045845 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:55:19.046018 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 06:55:19.046194 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 06:55:19.046381 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:55:19.046566 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 06:55:19.046734 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 06:55:19.051996 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 06:55:19.052168 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 06:55:19.052327 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 06:55:19.052507 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:55:19.052679 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 06:55:19.052884 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 06:55:19.053055 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:55:19.053233 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 06:55:19.053402 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 06:55:19.053576 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:55:19.053775 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 06:55:19.053963 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 06:55:19.054119 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:55:19.054290 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 06:55:19.054459 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 06:55:19.054634 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:55:19.054656 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 06:55:19.054671 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 06:55:19.054684 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 06:55:19.054698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 06:55:19.054712 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 06:55:19.054735 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 06:55:19.054748 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 06:55:19.054781 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 06:55:19.054804 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 06:55:19.055003 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 06:55:19.055166 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 06:55:19.055325 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T06:55:18 UTC (1734072918) Dec 13 06:55:19.055482 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 06:55:19.055502 kernel: intel_pstate: CPU model not supported Dec 13 06:55:19.055522 kernel: NET: Registered PF_INET6 protocol family Dec 13 06:55:19.055536 kernel: Segment Routing with IPv6 Dec 13 06:55:19.055550 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 06:55:19.055563 kernel: NET: Registered PF_PACKET protocol family Dec 13 06:55:19.055577 kernel: Key type dns_resolver registered Dec 13 06:55:19.055590 kernel: IPI shorthand broadcast: enabled Dec 13 06:55:19.055603 kernel: sched_clock: Marking stable (1015555483, 222548792)->(1541137829, -303033554) Dec 13 06:55:19.055616 kernel: registered taskstats version 1 Dec 13 06:55:19.055630 kernel: Loading compiled-in X.509 certificates Dec 13 06:55:19.055643 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 06:55:19.055660 kernel: Key type .fscrypt registered Dec 13 06:55:19.055681 kernel: Key type fscrypt-provisioning registered Dec 13 06:55:19.055694 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 06:55:19.055707 kernel: ima: Allocated hash algorithm: sha1 Dec 13 06:55:19.055721 kernel: ima: No architecture policies found Dec 13 06:55:19.055743 kernel: clk: Disabling unused clocks Dec 13 06:55:19.055757 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 06:55:19.059853 kernel: Write protecting the kernel read-only data: 28672k Dec 13 06:55:19.059876 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 06:55:19.059891 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 06:55:19.059904 kernel: Run /init as init process Dec 13 06:55:19.059918 kernel: with arguments: Dec 13 06:55:19.059932 kernel: /init Dec 13 06:55:19.059944 kernel: with environment: Dec 13 06:55:19.059957 kernel: HOME=/ Dec 13 06:55:19.059970 kernel: TERM=linux Dec 13 06:55:19.059983 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 06:55:19.060008 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:55:19.060033 systemd[1]: Detected virtualization kvm. Dec 13 06:55:19.060047 systemd[1]: Detected architecture x86-64. Dec 13 06:55:19.060061 systemd[1]: Running in initrd. Dec 13 06:55:19.060075 systemd[1]: No hostname configured, using default hostname. Dec 13 06:55:19.060089 systemd[1]: Hostname set to <localhost>. Dec 13 06:55:19.060104 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:55:19.060122 systemd[1]: Queued start job for default target initrd.target. Dec 13 06:55:19.060136 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:55:19.060162 systemd[1]: Reached target cryptsetup.target. Dec 13 06:55:19.060175 systemd[1]: Reached target paths.target. Dec 13 06:55:19.060188 systemd[1]: Reached target slices.target.
Dec 13 06:55:19.060206 systemd[1]: Reached target swap.target. Dec 13 06:55:19.060219 systemd[1]: Reached target timers.target. Dec 13 06:55:19.060234 systemd[1]: Listening on iscsid.socket. Dec 13 06:55:19.060251 systemd[1]: Listening on iscsiuio.socket. Dec 13 06:55:19.060264 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 06:55:19.060282 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 06:55:19.060296 systemd[1]: Listening on systemd-journald.socket. Dec 13 06:55:19.060309 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:55:19.060323 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:55:19.060337 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:55:19.060350 systemd[1]: Reached target sockets.target. Dec 13 06:55:19.060364 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:55:19.060382 systemd[1]: Finished network-cleanup.service. Dec 13 06:55:19.060408 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 06:55:19.060422 systemd[1]: Starting systemd-journald.service... Dec 13 06:55:19.060435 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:55:19.060457 systemd[1]: Starting systemd-resolved.service... Dec 13 06:55:19.060471 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 06:55:19.060485 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:55:19.060499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 06:55:19.060528 systemd-journald[201]: Journal started Dec 13 06:55:19.060626 systemd-journald[201]: Runtime Journal (/run/log/journal/2d6cb47126d8426c97e0636c0ac6a329) is 4.7M, max 38.1M, 33.3M free. 
Dec 13 06:55:18.980831 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 06:55:19.076518 kernel: Bridge firewalling registered Dec 13 06:55:19.030341 systemd-resolved[203]: Positive Trust Anchors: Dec 13 06:55:19.090704 systemd[1]: Started systemd-resolved.service. Dec 13 06:55:19.090745 kernel: audit: type=1130 audit(1734072919.076:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.090801 kernel: audit: type=1130 audit(1734072919.083:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.090832 systemd[1]: Started systemd-journald.service. Dec 13 06:55:19.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.030363 systemd-resolved[203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:55:19.030408 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:55:19.097324 kernel: SCSI subsystem initialized Dec 13 06:55:19.097351 kernel: audit: type=1130 audit(1734072919.096:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.034629 systemd-resolved[203]: Defaulting to hostname 'linux'. Dec 13 06:55:19.112084 kernel: audit: type=1130 audit(1734072919.101:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.064668 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 06:55:19.127533 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 06:55:19.127568 kernel: device-mapper: uevent: version 1.0.3 Dec 13 06:55:19.127597 kernel: audit: type=1130 audit(1734072919.102:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.127616 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 06:55:19.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.097259 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 06:55:19.102941 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 06:55:19.103780 systemd[1]: Reached target nss-lookup.target. Dec 13 06:55:19.105700 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 06:55:19.112339 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 06:55:19.135083 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 06:55:19.144063 kernel: audit: type=1130 audit(1734072919.137:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.137185 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 06:55:19.138505 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:55:19.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:19.151789 kernel: audit: type=1130 audit(1734072919.145:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.152991 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:55:19.158355 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 06:55:19.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.165858 systemd[1]: Starting dracut-cmdline.service... Dec 13 06:55:19.167732 kernel: audit: type=1130 audit(1734072919.159:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.168249 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:55:19.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.189840 kernel: audit: type=1130 audit(1734072919.169:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:19.198978 dracut-cmdline[224]: dracut-dracut-053 Dec 13 06:55:19.202443 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 06:55:19.289828 kernel: Loading iSCSI transport class v2.0-870. Dec 13 06:55:19.311785 kernel: iscsi: registered transport (tcp) Dec 13 06:55:19.340460 kernel: iscsi: registered transport (qla4xxx) Dec 13 06:55:19.340517 kernel: QLogic iSCSI HBA Driver Dec 13 06:55:19.388714 systemd[1]: Finished dracut-cmdline.service. Dec 13 06:55:19.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.391945 systemd[1]: Starting dracut-pre-udev.service... Dec 13 06:55:19.451866 kernel: raid6: sse2x4 gen() 7546 MB/s Dec 13 06:55:19.469836 kernel: raid6: sse2x4 xor() 4742 MB/s Dec 13 06:55:19.487877 kernel: raid6: sse2x2 gen() 5252 MB/s Dec 13 06:55:19.505840 kernel: raid6: sse2x2 xor() 7608 MB/s Dec 13 06:55:19.523838 kernel: raid6: sse2x1 gen() 5323 MB/s Dec 13 06:55:19.542635 kernel: raid6: sse2x1 xor() 6892 MB/s Dec 13 06:55:19.542672 kernel: raid6: using algorithm sse2x4 gen() 7546 MB/s Dec 13 06:55:19.542702 kernel: raid6: .... xor() 4742 MB/s, rmw enabled Dec 13 06:55:19.543867 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 06:55:19.561878 kernel: xor: automatically using best checksumming function avx Dec 13 06:55:19.682870 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 06:55:19.696615 systemd[1]: Finished dracut-pre-udev.service. 
Dec 13 06:55:19.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.697000 audit: BPF prog-id=7 op=LOAD Dec 13 06:55:19.697000 audit: BPF prog-id=8 op=LOAD Dec 13 06:55:19.698745 systemd[1]: Starting systemd-udevd.service... Dec 13 06:55:19.717814 systemd-udevd[402]: Using default interface naming scheme 'v252'. Dec 13 06:55:19.727035 systemd[1]: Started systemd-udevd.service. Dec 13 06:55:19.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.728865 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 06:55:19.745183 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Dec 13 06:55:19.785868 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 06:55:19.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:19.787920 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:55:19.882083 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 06:55:19.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:20.010838 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 06:55:20.022822 kernel: ACPI: bus type USB registered Dec 13 06:55:20.031332 kernel: usbcore: registered new interface driver usbfs Dec 13 06:55:20.031365 kernel: usbcore: registered new interface driver hub Dec 13 06:55:20.037114 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 06:55:20.065114 kernel: usbcore: registered new device driver usb Dec 13 06:55:20.065146 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 06:55:20.065164 kernel: GPT:17805311 != 125829119 Dec 13 06:55:20.065185 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 06:55:20.065202 kernel: GPT:17805311 != 125829119 Dec 13 06:55:20.065218 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 06:55:20.065235 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:55:20.071795 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:55:20.074385 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 06:55:20.074579 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 06:55:20.074814 kernel: AVX version of gcm_enc/dec engaged. Dec 13 06:55:20.074836 kernel: AES CTR mode by8 optimization enabled Dec 13 06:55:20.074860 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:55:20.075044 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 06:55:20.075227 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 06:55:20.075391 kernel: hub 1-0:1.0: USB hub found Dec 13 06:55:20.075597 kernel: hub 1-0:1.0: 4 ports detected Dec 13 06:55:20.075883 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 06:55:20.076102 kernel: hub 2-0:1.0: USB hub found Dec 13 06:55:20.076311 kernel: hub 2-0:1.0: 4 ports detected Dec 13 06:55:20.085815 kernel: libata version 3.00 loaded. Dec 13 06:55:20.109808 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Dec 13 06:55:20.113616 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 06:55:20.230508 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 06:55:20.230851 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 06:55:20.230874 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 06:55:20.231055 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 06:55:20.231236 kernel: scsi host0: ahci Dec 13 06:55:20.231459 kernel: scsi host1: ahci Dec 13 06:55:20.231657 kernel: scsi host2: ahci Dec 13 06:55:20.231900 kernel: scsi host3: ahci Dec 13 06:55:20.232091 kernel: scsi host4: ahci Dec 13 06:55:20.232306 kernel: scsi host5: ahci Dec 13 06:55:20.232501 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 06:55:20.232522 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 06:55:20.232551 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 06:55:20.232569 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 06:55:20.232586 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Dec 13 06:55:20.232603 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 06:55:20.233877 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 06:55:20.234685 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 06:55:20.243329 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 06:55:20.252319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 06:55:20.254456 systemd[1]: Starting disk-uuid.service... Dec 13 06:55:20.266801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:55:20.271742 disk-uuid[528]: Primary Header is updated. Dec 13 06:55:20.271742 disk-uuid[528]: Secondary Entries is updated. Dec 13 06:55:20.271742 disk-uuid[528]: Secondary Header is updated. Dec 13 06:55:20.319801 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 06:55:20.449804 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 06:55:20.449888 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 06:55:20.452847 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 06:55:20.460927 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 06:55:20.460977 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 06:55:20.461819 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 06:55:20.477832 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 06:55:20.485309 kernel: usbcore: registered new interface driver usbhid Dec 13 06:55:20.485353 kernel: usbhid: USB HID core driver Dec 13 06:55:20.495376 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 06:55:20.495430 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 06:55:21.284102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:55:21.285080 disk-uuid[529]: The operation has completed successfully. Dec 13 06:55:21.346200 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 06:55:21.346352 systemd[1]: Finished disk-uuid.service. Dec 13 06:55:21.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:21.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.348411 systemd[1]: Starting verity-setup.service... Dec 13 06:55:21.368826 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 06:55:21.427641 systemd[1]: Found device dev-mapper-usr.device. Dec 13 06:55:21.429656 systemd[1]: Mounting sysusr-usr.mount... Dec 13 06:55:21.431486 systemd[1]: Finished verity-setup.service. Dec 13 06:55:21.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.527815 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 06:55:21.528337 systemd[1]: Mounted sysusr-usr.mount. Dec 13 06:55:21.529260 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 06:55:21.530442 systemd[1]: Starting ignition-setup.service... Dec 13 06:55:21.532792 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 06:55:21.554342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:55:21.554400 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:55:21.554427 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:55:21.574207 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 06:55:21.582442 systemd[1]: Finished ignition-setup.service. Dec 13 06:55:21.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.584336 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 06:55:21.660754 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 06:55:21.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.662000 audit: BPF prog-id=9 op=LOAD Dec 13 06:55:21.664211 systemd[1]: Starting systemd-networkd.service... Dec 13 06:55:21.697482 systemd-networkd[709]: lo: Link UP Dec 13 06:55:21.697508 systemd-networkd[709]: lo: Gained carrier Dec 13 06:55:21.699007 systemd-networkd[709]: Enumeration completed Dec 13 06:55:21.699828 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 06:55:21.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.702118 systemd[1]: Started systemd-networkd.service. Dec 13 06:55:21.702238 systemd-networkd[709]: eth0: Link UP Dec 13 06:55:21.702245 systemd-networkd[709]: eth0: Gained carrier Dec 13 06:55:21.703442 systemd[1]: Reached target network.target. Dec 13 06:55:21.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.705611 systemd[1]: Starting iscsiuio.service... Dec 13 06:55:21.715993 systemd[1]: Started iscsiuio.service. Dec 13 06:55:21.725534 systemd[1]: Starting iscsid.service... Dec 13 06:55:21.731450 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:55:21.731450 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 06:55:21.731450 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 06:55:21.731450 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 06:55:21.739411 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:55:21.739411 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 06:55:21.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.735564 systemd[1]: Started iscsid.service. Dec 13 06:55:21.735972 systemd-networkd[709]: eth0: DHCPv4 address 10.230.34.74/30, gateway 10.230.34.73 acquired from 10.230.34.73 Dec 13 06:55:21.739997 systemd[1]: Starting dracut-initqueue.service... Dec 13 06:55:21.760861 systemd[1]: Finished dracut-initqueue.service. Dec 13 06:55:21.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.761713 systemd[1]: Reached target remote-fs-pre.target. Dec 13 06:55:21.762970 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 06:55:21.763597 systemd[1]: Reached target remote-fs.target. Dec 13 06:55:21.766597 systemd[1]: Starting dracut-pre-mount.service... Dec 13 06:55:21.786566 systemd[1]: Finished dracut-pre-mount.service.
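The iscsid warnings above stem from a missing /etc/iscsi/initiatorname.iscsi. A minimal sketch of creating one (the IQN below is a placeholder in the iqn.yyyy-mm.reversed-domain[:identifier] form; on a real host you would write to /etc/iscsi and generate a unique name, e.g. with open-iscsi's `iscsi-iname` tool):

```python
import os
import tempfile

def write_initiator_name(etc_iscsi: str, iqn: str) -> str:
    """Write a properly formatted InitiatorName file where iscsid looks
    for it, in the one-line key=value format the warning describes."""
    path = os.path.join(etc_iscsi, "initiatorname.iscsi")
    with open(path, "w") as f:
        f.write("InitiatorName=%s\n" % iqn)
    return path

d = tempfile.mkdtemp()  # stand-in for /etc/iscsi in this sketch
p = write_initiator_name(d, "iqn.2024-12.com.example:node1")
print(open(p).read().strip())
```

With this file in place, software-iSCSI discovery and login can proceed; as the log notes, hardware iSCSI HBAs such as qla4xxx do not need it.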
Dec 13 06:55:21.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.811398 ignition[643]: Ignition 2.14.0 Dec 13 06:55:21.811425 ignition[643]: Stage: fetch-offline Dec 13 06:55:21.811595 ignition[643]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:55:21.811671 ignition[643]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:55:21.813495 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:55:21.813714 ignition[643]: parsed url from cmdline: "" Dec 13 06:55:21.813721 ignition[643]: no config URL provided Dec 13 06:55:21.813731 ignition[643]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 06:55:21.815987 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 06:55:21.813774 ignition[643]: no config at "/usr/lib/ignition/user.ign" Dec 13 06:55:21.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.813803 ignition[643]: failed to fetch config: resource requires networking Dec 13 06:55:21.818700 systemd[1]: Starting ignition-fetch.service... 
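Each Ignition stage above logs "parsing config with SHA512: …", a fingerprint of the exact config bytes it read from /usr/lib/ignition/base.d/base.ign. A sketch of how such a digest is derived (the config content here is invented, so the digest will not match the one in the log):

```python
import hashlib

def config_digest(config_bytes: bytes) -> str:
    """Return the hex SHA-512 of a config blob, the same kind of
    fingerprint Ignition prints as 'parsing config with SHA512: ...'."""
    return hashlib.sha512(config_bytes).hexdigest()

digest = config_digest(b'{"ignition": {"version": "2.14.0"}}')
print(digest[:32])
```

Because the base config is identical across stages, the same 128-hex-digit digest (ce918c…) repeats in every fetch/kargs/disks/mount/files record.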
Dec 13 06:55:21.814599 ignition[643]: Ignition finished successfully Dec 13 06:55:21.830429 ignition[728]: Ignition 2.14.0 Dec 13 06:55:21.830452 ignition[728]: Stage: fetch Dec 13 06:55:21.830641 ignition[728]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:55:21.830686 ignition[728]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:55:21.832271 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:55:21.832437 ignition[728]: parsed url from cmdline: "" Dec 13 06:55:21.832443 ignition[728]: no config URL provided Dec 13 06:55:21.832452 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 06:55:21.832479 ignition[728]: no config at "/usr/lib/ignition/user.ign" Dec 13 06:55:21.838983 ignition[728]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 06:55:21.839046 ignition[728]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 06:55:21.839641 ignition[728]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 06:55:21.854907 ignition[728]: GET result: OK Dec 13 06:55:21.855419 ignition[728]: parsing config with SHA512: 99e16eafb74b4d825e8c365d124f17e6f87df95faddcef413f8e1a9d829db14380d8d64a6c77241322c543d0ef72b243f5dacb4651baa94351ceccdfbdd89687 Dec 13 06:55:21.863343 unknown[728]: fetched base config from "system" Dec 13 06:55:21.863389 unknown[728]: fetched base config from "system" Dec 13 06:55:21.863818 ignition[728]: fetch: fetch complete Dec 13 06:55:21.863404 unknown[728]: fetched user config from "openstack" Dec 13 06:55:21.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:21.863827 ignition[728]: fetch: fetch passed Dec 13 06:55:21.865472 systemd[1]: Finished ignition-fetch.service. Dec 13 06:55:21.863888 ignition[728]: Ignition finished successfully Dec 13 06:55:21.868236 systemd[1]: Starting ignition-kargs.service... Dec 13 06:55:21.882498 ignition[734]: Ignition 2.14.0 Dec 13 06:55:21.882519 ignition[734]: Stage: kargs Dec 13 06:55:21.882683 ignition[734]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:55:21.882717 ignition[734]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:55:21.884290 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:55:21.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.886946 systemd[1]: Finished ignition-kargs.service. Dec 13 06:55:21.885581 ignition[734]: kargs: kargs passed Dec 13 06:55:21.889294 systemd[1]: Starting ignition-disks.service... Dec 13 06:55:21.885662 ignition[734]: Ignition finished successfully Dec 13 06:55:21.900619 ignition[740]: Ignition 2.14.0 Dec 13 06:55:21.900639 ignition[740]: Stage: disks Dec 13 06:55:21.900883 ignition[740]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:55:21.900938 ignition[740]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:55:21.902299 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:55:21.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:21.904690 systemd[1]: Finished ignition-disks.service. Dec 13 06:55:21.903558 ignition[740]: disks: disks passed Dec 13 06:55:21.905853 systemd[1]: Reached target initrd-root-device.target. Dec 13 06:55:21.903623 ignition[740]: Ignition finished successfully Dec 13 06:55:21.906554 systemd[1]: Reached target local-fs-pre.target. Dec 13 06:55:21.907916 systemd[1]: Reached target local-fs.target. Dec 13 06:55:21.909144 systemd[1]: Reached target sysinit.target. Dec 13 06:55:21.910504 systemd[1]: Reached target basic.target. Dec 13 06:55:21.913138 systemd[1]: Starting systemd-fsck-root.service... Dec 13 06:55:21.934281 systemd-fsck[748]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 06:55:21.937979 systemd[1]: Finished systemd-fsck-root.service. Dec 13 06:55:21.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:21.939806 systemd[1]: Mounting sysroot.mount... Dec 13 06:55:21.953111 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 06:55:21.954392 systemd[1]: Mounted sysroot.mount. Dec 13 06:55:21.955260 systemd[1]: Reached target initrd-root-fs.target. Dec 13 06:55:21.958211 systemd[1]: Mounting sysroot-usr.mount... Dec 13 06:55:21.959485 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 06:55:21.960687 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 06:55:21.961534 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 06:55:21.961598 systemd[1]: Reached target ignition-diskful.target. Dec 13 06:55:21.970108 systemd[1]: Mounted sysroot-usr.mount. Dec 13 06:55:21.974527 systemd[1]: Starting initrd-setup-root.service... 
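The systemd-fsck summary above reports inode and block usage as used/total pairs ("clean, 621/1628000 files, 124058/1617920 blocks"). A small parser for that message shape (the regex is an assumption fitted to this one e2fsck summary format):

```python
import re

FSCK_RE = re.compile(
    r"(?P<label>\S+): clean, (?P<files_used>\d+)/(?P<files_total>\d+) files, "
    r"(?P<blocks_used>\d+)/(?P<blocks_total>\d+) blocks"
)

def parse_fsck(line: str) -> dict:
    """Extract usage counters from an e2fsck 'clean' summary line."""
    m = FSCK_RE.search(line)
    if not m:
        raise ValueError("not a clean-summary line")
    return {k: (v if k == "label" else int(v)) for k, v in m.groupdict().items()}

info = parse_fsck("ROOT: clean, 621/1628000 files, 124058/1617920 blocks")
print(info["files_used"], info["blocks_total"])  # -> 621 1617920
```

The "clean" verdict is what lets boot continue straight to mounting /sysroot without a full filesystem check.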
Dec 13 06:55:21.982662 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 06:55:21.998857 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Dec 13 06:55:22.007803 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 06:55:22.017900 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 06:55:22.088221 systemd[1]: Finished initrd-setup-root.service. Dec 13 06:55:22.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:22.090184 systemd[1]: Starting ignition-mount.service... Dec 13 06:55:22.091938 systemd[1]: Starting sysroot-boot.service... Dec 13 06:55:22.107887 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 06:55:22.126063 coreos-metadata[754]: Dec 13 06:55:22.125 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 06:55:22.131276 ignition[804]: INFO : Ignition 2.14.0 Dec 13 06:55:22.135047 ignition[804]: INFO : Stage: mount Dec 13 06:55:22.135047 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:55:22.135047 ignition[804]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:55:22.135047 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:55:22.135047 ignition[804]: INFO : mount: mount passed Dec 13 06:55:22.135047 ignition[804]: INFO : Ignition finished successfully Dec 13 06:55:22.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:22.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:22.134536 systemd[1]: Finished sysroot-boot.service. Dec 13 06:55:22.138527 systemd[1]: Finished ignition-mount.service. Dec 13 06:55:22.152055 coreos-metadata[754]: Dec 13 06:55:22.151 INFO Fetch successful Dec 13 06:55:22.153012 coreos-metadata[754]: Dec 13 06:55:22.152 INFO wrote hostname srv-avpje.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 06:55:22.156309 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 06:55:22.156495 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 06:55:22.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:22.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:22.452055 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 06:55:22.476112 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (811) Dec 13 06:55:22.476280 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:55:22.476359 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:55:22.476381 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:55:22.481250 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 06:55:22.483062 systemd[1]: Starting ignition-files.service... 
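Above, the hostname service fetches http://169.254.169.254/latest/meta-data/hostname and writes the result to /sysroot/etc/hostname. A sketch of that flow with the HTTP fetch stubbed out (the helper names and paths are illustrative, not coreos-metadata's actual code):

```python
import os
import tempfile

def write_hostname(fetch, etc_dir: str) -> str:
    """Fetch the hostname (e.g. from the metadata service) and persist it,
    mirroring what the log shows the hostname unit doing under /sysroot/etc."""
    hostname = fetch().strip()
    path = os.path.join(etc_dir, "hostname")
    with open(path, "w") as f:
        f.write(hostname + "\n")
    return path

# Stub standing in for the GET against 169.254.169.254.
def fake_fetch():
    return "srv-example.gb1.brightbox.com\n"

d = tempfile.mkdtemp()  # stand-in for /sysroot/etc in this sketch
p = write_hostname(fake_fetch, d)
print(open(p).read().strip())  # -> srv-example.gb1.brightbox.com
```

Separating the fetch callable from the file write also reflects why the unit can be skipped or retried cleanly: the metadata GET is the only part that depends on the network being up.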
Dec 13 06:55:22.505036 ignition[831]: INFO : Ignition 2.14.0 Dec 13 06:55:22.505036 ignition[831]: INFO : Stage: files Dec 13 06:55:22.506796 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:55:22.506796 ignition[831]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:55:22.506796 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:55:22.511231 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Dec 13 06:55:22.512609 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 06:55:22.512609 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 06:55:22.517368 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 06:55:22.518364 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 06:55:22.519288 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 06:55:22.518989 unknown[831]: wrote ssh authorized keys file for user: core Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 06:55:22.521204 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 06:55:23.075004 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 06:55:23.745457 systemd-networkd[709]: eth0: Gained IPv6LL Dec 13 06:55:24.412871 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 06:55:24.412871 ignition[831]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 06:55:24.412871 ignition[831]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 06:55:24.421582 ignition[831]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 06:55:24.421582 ignition[831]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 06:55:24.426145 ignition[831]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 06:55:24.427973 ignition[831]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 06:55:24.429270 ignition[831]: INFO : files: files 
passed Dec 13 06:55:24.429270 ignition[831]: INFO : Ignition finished successfully Dec 13 06:55:24.432148 systemd[1]: Finished ignition-files.service. Dec 13 06:55:24.440511 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 06:55:24.440574 kernel: audit: type=1130 audit(1734072924.432:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.436086 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 06:55:24.441601 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 06:55:24.446071 systemd[1]: Starting ignition-quench.service... Dec 13 06:55:24.448727 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 06:55:24.448904 systemd[1]: Finished ignition-quench.service. Dec 13 06:55:24.460431 kernel: audit: type=1130 audit(1734072924.448:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.460466 kernel: audit: type=1131 audit(1734072924.448:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:24.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.462072 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 06:55:24.462990 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 06:55:24.469897 kernel: audit: type=1130 audit(1734072924.463:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.464391 systemd[1]: Reached target ignition-complete.target. Dec 13 06:55:24.471744 systemd[1]: Starting initrd-parse-etc.service... Dec 13 06:55:24.493643 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 06:55:24.494805 systemd[1]: Finished initrd-parse-etc.service. Dec 13 06:55:24.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.497988 systemd[1]: Reached target initrd-fs.target. Dec 13 06:55:24.508562 kernel: audit: type=1130 audit(1734072924.495:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.508600 kernel: audit: type=1131 audit(1734072924.497:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 06:55:24.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.506754 systemd[1]: Reached target initrd.target. Dec 13 06:55:24.507448 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 06:55:24.508783 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 06:55:24.532510 kernel: audit: type=1130 audit(1734072924.525:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.525856 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 06:55:24.527650 systemd[1]: Starting initrd-cleanup.service... Dec 13 06:55:24.540582 systemd[1]: Stopped target nss-lookup.target. Dec 13 06:55:24.541437 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 06:55:24.542938 systemd[1]: Stopped target timers.target. Dec 13 06:55:24.544116 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 06:55:24.550445 kernel: audit: type=1131 audit(1734072924.544:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.544318 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 06:55:24.545520 systemd[1]: Stopped target initrd.target. 
Dec 13 06:55:24.551322 systemd[1]: Stopped target basic.target. Dec 13 06:55:24.552531 systemd[1]: Stopped target ignition-complete.target. Dec 13 06:55:24.553813 systemd[1]: Stopped target ignition-diskful.target. Dec 13 06:55:24.555060 systemd[1]: Stopped target initrd-root-device.target. Dec 13 06:55:24.556312 systemd[1]: Stopped target remote-fs.target. Dec 13 06:55:24.557555 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 06:55:24.558847 systemd[1]: Stopped target sysinit.target. Dec 13 06:55:24.560211 systemd[1]: Stopped target local-fs.target. Dec 13 06:55:24.561343 systemd[1]: Stopped target local-fs-pre.target. Dec 13 06:55:24.562540 systemd[1]: Stopped target swap.target. Dec 13 06:55:24.569968 kernel: audit: type=1131 audit(1734072924.564:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.563632 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 06:55:24.563880 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 06:55:24.577112 kernel: audit: type=1131 audit(1734072924.571:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.565084 systemd[1]: Stopped target cryptsetup.target. 
Dec 13 06:55:24.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.570691 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 06:55:24.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.570929 systemd[1]: Stopped dracut-initqueue.service. Dec 13 06:55:24.599179 ignition[870]: INFO : Ignition 2.14.0 Dec 13 06:55:24.599179 ignition[870]: INFO : Stage: umount Dec 13 06:55:24.599179 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:55:24.599179 ignition[870]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:55:24.599179 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:55:24.599179 ignition[870]: INFO : umount: umount passed Dec 13 06:55:24.599179 ignition[870]: INFO : Ignition finished successfully Dec 13 06:55:24.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.608086 iscsid[714]: iscsid shutting down. Dec 13 06:55:24.572109 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 06:55:24.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.572325 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Dec 13 06:55:24.578035 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 06:55:24.578245 systemd[1]: Stopped ignition-files.service. Dec 13 06:55:24.580576 systemd[1]: Stopping ignition-mount.service... Dec 13 06:55:24.594505 systemd[1]: Stopping iscsid.service... Dec 13 06:55:24.599753 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 06:55:24.599992 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 06:55:24.602374 systemd[1]: Stopping sysroot-boot.service... Dec 13 06:55:24.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.607382 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 06:55:24.608890 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 06:55:24.613029 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 06:55:24.613259 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 06:55:24.627915 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 06:55:24.629035 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 06:55:24.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.629197 systemd[1]: Stopped iscsid.service. Dec 13 06:55:24.630741 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 06:55:24.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.630924 systemd[1]: Stopped ignition-mount.service. Dec 13 06:55:24.633394 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 06:55:24.633520 systemd[1]: Finished initrd-cleanup.service. 
Dec 13 06:55:24.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.635911 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 06:55:24.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.636011 systemd[1]: Stopped ignition-disks.service. Dec 13 06:55:24.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.636731 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 06:55:24.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.636850 systemd[1]: Stopped ignition-kargs.service. Dec 13 06:55:24.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.638130 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 06:55:24.638188 systemd[1]: Stopped ignition-fetch.service. Dec 13 06:55:24.639388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 06:55:24.639447 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 06:55:24.640847 systemd[1]: Stopped target paths.target. 
Dec 13 06:55:24.641922 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 06:55:24.645828 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 06:55:24.647053 systemd[1]: Stopped target slices.target. Dec 13 06:55:24.649200 systemd[1]: Stopped target sockets.target. Dec 13 06:55:24.650434 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 06:55:24.650489 systemd[1]: Closed iscsid.socket. Dec 13 06:55:24.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.651561 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 06:55:24.651626 systemd[1]: Stopped ignition-setup.service. Dec 13 06:55:24.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.652942 systemd[1]: Stopping iscsiuio.service... Dec 13 06:55:24.657598 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 06:55:24.657779 systemd[1]: Stopped iscsiuio.service. Dec 13 06:55:24.658899 systemd[1]: Stopped target network.target. Dec 13 06:55:24.659995 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 06:55:24.660068 systemd[1]: Closed iscsiuio.socket. Dec 13 06:55:24.662238 systemd[1]: Stopping systemd-networkd.service... Dec 13 06:55:24.663200 systemd[1]: Stopping systemd-resolved.service... Dec 13 06:55:24.666395 systemd-networkd[709]: eth0: DHCPv6 lease lost Dec 13 06:55:24.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.669976 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Dec 13 06:55:24.670157 systemd[1]: Stopped systemd-networkd.service. Dec 13 06:55:24.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.672264 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 06:55:24.674000 audit: BPF prog-id=9 op=UNLOAD Dec 13 06:55:24.675000 audit: BPF prog-id=6 op=UNLOAD Dec 13 06:55:24.672425 systemd[1]: Stopped systemd-resolved.service. Dec 13 06:55:24.675549 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 06:55:24.675598 systemd[1]: Closed systemd-networkd.socket. Dec 13 06:55:24.677806 systemd[1]: Stopping network-cleanup.service... Dec 13 06:55:24.681419 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 06:55:24.681500 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 06:55:24.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.683738 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 06:55:24.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.683831 systemd[1]: Stopped systemd-sysctl.service. Dec 13 06:55:24.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.685532 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 06:55:24.685591 systemd[1]: Stopped systemd-modules-load.service. Dec 13 06:55:24.693043 systemd[1]: Stopping systemd-udevd.service... 
Dec 13 06:55:24.696500 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 06:55:24.699349 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 06:55:24.699593 systemd[1]: Stopped systemd-udevd.service. Dec 13 06:55:24.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.702812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 06:55:24.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.702892 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 06:55:24.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.703577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 06:55:24.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.703624 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 06:55:24.704404 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 06:55:24.704463 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 06:55:24.705536 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 06:55:24.705594 systemd[1]: Stopped dracut-cmdline.service. Dec 13 06:55:24.706950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 06:55:24.707006 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 06:55:24.709229 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Dec 13 06:55:24.717512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 06:55:24.717599 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 06:55:24.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.721308 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 06:55:24.721481 systemd[1]: Stopped network-cleanup.service. Dec 13 06:55:24.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.723383 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 06:55:24.723553 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 06:55:24.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.755774 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 06:55:24.755953 systemd[1]: Stopped sysroot-boot.service. Dec 13 06:55:24.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.757923 systemd[1]: Reached target initrd-switch-root.target. Dec 13 06:55:24.758870 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 06:55:24.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:24.758939 systemd[1]: Stopped initrd-setup-root.service. Dec 13 06:55:24.761301 systemd[1]: Starting initrd-switch-root.service... Dec 13 06:55:24.777227 systemd[1]: Switching root. Dec 13 06:55:24.797365 systemd-journald[201]: Journal stopped Dec 13 06:55:28.959218 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Dec 13 06:55:28.959390 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 06:55:28.959425 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 06:55:28.959455 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 06:55:28.959483 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 06:55:28.959503 kernel: SELinux: policy capability open_perms=1 Dec 13 06:55:28.959531 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 06:55:28.959577 kernel: SELinux: policy capability always_check_network=0 Dec 13 06:55:28.959608 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 06:55:28.959629 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 06:55:28.959661 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 06:55:28.959689 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 06:55:28.959711 systemd[1]: Successfully loaded SELinux policy in 66.485ms. Dec 13 06:55:28.959773 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.839ms. 
Dec 13 06:55:28.959800 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:55:28.959823 systemd[1]: Detected virtualization kvm. Dec 13 06:55:28.962629 systemd[1]: Detected architecture x86-64. Dec 13 06:55:28.962669 systemd[1]: Detected first boot. Dec 13 06:55:28.962707 systemd[1]: Hostname set to . Dec 13 06:55:28.962732 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:55:28.962754 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 06:55:28.962801 systemd[1]: Populated /etc with preset unit settings. Dec 13 06:55:28.962832 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:55:28.962862 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:55:28.962892 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:55:28.962938 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 06:55:28.963573 systemd[1]: Stopped initrd-switch-root.service. Dec 13 06:55:28.963603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 06:55:28.963625 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 06:55:28.963648 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 06:55:28.963679 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. 
Dec 13 06:55:28.963702 systemd[1]: Created slice system-getty.slice. Dec 13 06:55:28.963750 systemd[1]: Created slice system-modprobe.slice. Dec 13 06:55:28.963798 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 06:55:28.963848 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 06:55:28.963872 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 06:55:28.963905 systemd[1]: Created slice user.slice. Dec 13 06:55:28.963925 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:55:28.963956 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 06:55:28.963978 systemd[1]: Set up automount boot.automount. Dec 13 06:55:28.964010 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 06:55:28.964044 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 06:55:28.964065 systemd[1]: Stopped target initrd-fs.target. Dec 13 06:55:28.964086 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 06:55:28.964119 systemd[1]: Reached target integritysetup.target. Dec 13 06:55:28.964141 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 06:55:28.964168 systemd[1]: Reached target remote-fs.target. Dec 13 06:55:28.964196 systemd[1]: Reached target slices.target. Dec 13 06:55:28.964218 systemd[1]: Reached target swap.target. Dec 13 06:55:28.964238 systemd[1]: Reached target torcx.target. Dec 13 06:55:28.964266 systemd[1]: Reached target veritysetup.target. Dec 13 06:55:28.964288 systemd[1]: Listening on systemd-coredump.socket. Dec 13 06:55:28.964329 systemd[1]: Listening on systemd-initctl.socket. Dec 13 06:55:28.964362 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:55:28.964391 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:55:28.964414 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:55:28.964435 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 06:55:28.964456 systemd[1]: Mounting dev-hugepages.mount... 
Dec 13 06:55:28.964477 systemd[1]: Mounting dev-mqueue.mount... Dec 13 06:55:28.964498 systemd[1]: Mounting media.mount... Dec 13 06:55:28.964520 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:55:28.964566 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 06:55:28.964590 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 06:55:28.964629 systemd[1]: Mounting tmp.mount... Dec 13 06:55:28.964659 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 06:55:28.964682 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:55:28.964704 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:55:28.964732 systemd[1]: Starting modprobe@configfs.service... Dec 13 06:55:28.964755 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:55:28.964789 systemd[1]: Starting modprobe@drm.service... Dec 13 06:55:28.964821 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:55:28.964843 systemd[1]: Starting modprobe@fuse.service... Dec 13 06:55:28.964880 systemd[1]: Starting modprobe@loop.service... Dec 13 06:55:28.964904 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 06:55:28.964933 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 06:55:28.964956 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 06:55:28.964990 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 06:55:28.965014 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 06:55:28.965035 systemd[1]: Stopped systemd-journald.service. Dec 13 06:55:28.965057 systemd[1]: Starting systemd-journald.service... Dec 13 06:55:28.965079 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:55:28.965111 systemd[1]: Starting systemd-network-generator.service... Dec 13 06:55:28.965139 systemd[1]: Starting systemd-remount-fs.service... 
Dec 13 06:55:28.965160 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:55:28.965200 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 06:55:28.965221 systemd[1]: Stopped verity-setup.service. Dec 13 06:55:28.965242 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:55:28.965275 systemd[1]: Mounted dev-hugepages.mount. Dec 13 06:55:28.965296 kernel: fuse: init (API version 7.34) Dec 13 06:55:28.965328 systemd[1]: Mounted dev-mqueue.mount. Dec 13 06:55:28.965361 systemd[1]: Mounted media.mount. Dec 13 06:55:28.965397 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 06:55:28.965425 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 06:55:28.965447 systemd[1]: Mounted tmp.mount. Dec 13 06:55:28.965468 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:55:28.965489 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 06:55:28.965515 systemd-journald[980]: Journal started Dec 13 06:55:28.965625 systemd-journald[980]: Runtime Journal (/run/log/journal/2d6cb47126d8426c97e0636c0ac6a329) is 4.7M, max 38.1M, 33.3M free. Dec 13 06:55:24.973000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 06:55:28.971990 systemd[1]: Finished modprobe@configfs.service. Dec 13 06:55:28.972031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:55:25.069000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:55:28.974405 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 06:55:25.069000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:55:25.069000 audit: BPF prog-id=10 op=LOAD Dec 13 06:55:25.069000 audit: BPF prog-id=10 op=UNLOAD Dec 13 06:55:25.069000 audit: BPF prog-id=11 op=LOAD Dec 13 06:55:25.069000 audit: BPF prog-id=11 op=UNLOAD Dec 13 06:55:25.214000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 06:55:25.214000 audit[903]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:55:25.214000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:55:25.218000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 06:55:25.218000 audit[903]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:55:25.218000 audit: CWD cwd="/" Dec 13 06:55:25.218000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:25.218000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:25.218000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:55:28.702000 audit: BPF prog-id=12 op=LOAD Dec 13 06:55:28.702000 audit: BPF prog-id=3 op=UNLOAD Dec 13 06:55:28.703000 audit: BPF prog-id=13 op=LOAD Dec 13 06:55:28.703000 audit: BPF prog-id=14 op=LOAD Dec 13 06:55:28.703000 audit: BPF prog-id=4 op=UNLOAD Dec 13 06:55:28.703000 audit: BPF prog-id=5 op=UNLOAD Dec 13 06:55:28.706000 audit: BPF prog-id=15 op=LOAD Dec 13 06:55:28.706000 audit: BPF prog-id=12 op=UNLOAD Dec 13 06:55:28.706000 audit: BPF prog-id=16 op=LOAD Dec 13 06:55:28.706000 audit: BPF prog-id=17 op=LOAD Dec 13 06:55:28.706000 audit: BPF prog-id=13 op=UNLOAD Dec 13 06:55:28.706000 audit: BPF prog-id=14 op=UNLOAD Dec 13 06:55:28.707000 audit: BPF prog-id=18 op=LOAD Dec 13 06:55:28.707000 audit: BPF prog-id=15 op=UNLOAD Dec 13 06:55:28.707000 audit: BPF prog-id=19 op=LOAD Dec 13 06:55:28.707000 audit: BPF prog-id=20 op=LOAD Dec 13 06:55:28.707000 audit: BPF prog-id=16 op=UNLOAD Dec 13 06:55:28.707000 audit: BPF prog-id=17 op=UNLOAD Dec 13 06:55:28.980193 systemd[1]: Started systemd-journald.service. 
Dec 13 06:55:28.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.718000 audit: BPF prog-id=18 op=UNLOAD Dec 13 06:55:28.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:28.891000 audit: BPF prog-id=21 op=LOAD Dec 13 06:55:28.891000 audit: BPF prog-id=22 op=LOAD Dec 13 06:55:28.891000 audit: BPF prog-id=23 op=LOAD Dec 13 06:55:28.891000 audit: BPF prog-id=19 op=UNLOAD Dec 13 06:55:28.891000 audit: BPF prog-id=20 op=UNLOAD Dec 13 06:55:28.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.955000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 06:55:28.955000 audit[980]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff3a386a30 a2=4000 a3=7fff3a386acc items=0 ppid=1 pid=980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:55:28.955000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 06:55:28.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:28.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.986861 kernel: loop: module loaded Dec 13 06:55:28.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:28.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:28.700586 systemd[1]: Queued start job for default target multi-user.target. Dec 13 06:55:25.210626 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:55:28.700612 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 06:55:25.211299 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:55:28.709369 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 06:55:25.211339 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 06:55:28.978001 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 06:55:25.211397 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 06:55:28.978251 systemd[1]: Finished modprobe@drm.service. Dec 13 06:55:25.211416 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 06:55:28.980026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:55:25.211480 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 06:55:28.980376 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:55:25.211503 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 06:55:28.982473 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 06:55:25.211926 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 06:55:28.982710 systemd[1]: Finished modprobe@fuse.service. Dec 13 06:55:25.212010 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:55:28.983951 systemd[1]: Finished systemd-network-generator.service. Dec 13 06:55:25.212040 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 06:55:28.986177 systemd[1]: Finished systemd-remount-fs.service. 
Dec 13 06:55:25.214297 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 06:55:28.988284 systemd[1]: Reached target network-pre.target. Dec 13 06:55:25.214381 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 06:55:28.991989 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 06:55:25.214414 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 06:55:28.999020 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 06:55:25.214441 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 06:55:29.004941 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 06:55:25.214473 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 06:55:25.214498 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 06:55:28.051747 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:28Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:55:28.052213 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:28Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:55:28.052410 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:28Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:55:28.052824 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:28Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:55:28.052931 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:28Z" level=debug msg="profile applied" sealed 
profile=/run/torcx/profile.json upper profile= Dec 13 06:55:28.053075 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-12-13T06:55:28Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 06:55:29.009402 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 06:55:29.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.011897 systemd[1]: Starting systemd-journal-flush.service... Dec 13 06:55:29.014165 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:55:29.024586 systemd-journald[980]: Time spent on flushing to /var/log/journal/2d6cb47126d8426c97e0636c0ac6a329 is 75.562ms for 1265 entries. Dec 13 06:55:29.024586 systemd-journald[980]: System Journal (/var/log/journal/2d6cb47126d8426c97e0636c0ac6a329) is 8.0M, max 584.8M, 576.8M free. Dec 13 06:55:29.134355 systemd-journald[980]: Received client request to flush runtime journal. Dec 13 06:55:29.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:29.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.016434 systemd[1]: Starting systemd-random-seed.service... Dec 13 06:55:29.020928 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:55:29.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.022108 systemd[1]: Finished modprobe@loop.service. Dec 13 06:55:29.028116 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:55:29.029240 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 06:55:29.044151 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 06:55:29.045596 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:55:29.048064 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:55:29.060860 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 06:55:29.063584 systemd[1]: Starting systemd-sysusers.service... 
Dec 13 06:55:29.102562 systemd[1]: Finished systemd-random-seed.service. Dec 13 06:55:29.103465 systemd[1]: Reached target first-boot-complete.target. Dec 13 06:55:29.115375 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:55:29.131096 systemd[1]: Finished systemd-sysusers.service. Dec 13 06:55:29.135403 systemd[1]: Finished systemd-journal-flush.service. Dec 13 06:55:29.188938 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 06:55:29.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.191828 systemd[1]: Starting systemd-udev-settle.service... Dec 13 06:55:29.205657 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 06:55:29.858431 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 06:55:29.869625 kernel: kauditd_printk_skb: 106 callbacks suppressed Dec 13 06:55:29.869829 kernel: audit: type=1130 audit(1734072929.862:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.868000 audit: BPF prog-id=24 op=LOAD Dec 13 06:55:29.868000 audit: BPF prog-id=25 op=LOAD Dec 13 06:55:29.873519 systemd[1]: Starting systemd-udevd.service... 
Dec 13 06:55:29.874162 kernel: audit: type=1334 audit(1734072929.868:147): prog-id=24 op=LOAD Dec 13 06:55:29.874231 kernel: audit: type=1334 audit(1734072929.868:148): prog-id=25 op=LOAD Dec 13 06:55:29.874276 kernel: audit: type=1334 audit(1734072929.868:149): prog-id=7 op=UNLOAD Dec 13 06:55:29.874325 kernel: audit: type=1334 audit(1734072929.868:150): prog-id=8 op=UNLOAD Dec 13 06:55:29.868000 audit: BPF prog-id=7 op=UNLOAD Dec 13 06:55:29.868000 audit: BPF prog-id=8 op=UNLOAD Dec 13 06:55:29.904886 systemd-udevd[1013]: Using default interface naming scheme 'v252'. Dec 13 06:55:29.939101 systemd[1]: Started systemd-udevd.service. Dec 13 06:55:29.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.950789 kernel: audit: type=1130 audit(1734072929.939:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:29.951491 systemd[1]: Starting systemd-networkd.service... Dec 13 06:55:29.941000 audit: BPF prog-id=26 op=LOAD Dec 13 06:55:29.957810 kernel: audit: type=1334 audit(1734072929.941:152): prog-id=26 op=LOAD Dec 13 06:55:29.961000 audit: BPF prog-id=27 op=LOAD Dec 13 06:55:29.964797 kernel: audit: type=1334 audit(1734072929.961:153): prog-id=27 op=LOAD Dec 13 06:55:29.964000 audit: BPF prog-id=28 op=LOAD Dec 13 06:55:29.964000 audit: BPF prog-id=29 op=LOAD Dec 13 06:55:29.969612 kernel: audit: type=1334 audit(1734072929.964:154): prog-id=28 op=LOAD Dec 13 06:55:29.969680 kernel: audit: type=1334 audit(1734072929.964:155): prog-id=29 op=LOAD Dec 13 06:55:29.969970 systemd[1]: Starting systemd-userdbd.service... Dec 13 06:55:30.019912 systemd[1]: Started systemd-userdbd.service. 
Dec 13 06:55:30.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.037090 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 06:55:30.129855 systemd-networkd[1022]: lo: Link UP Dec 13 06:55:30.130447 systemd-networkd[1022]: lo: Gained carrier Dec 13 06:55:30.131540 systemd-networkd[1022]: Enumeration completed Dec 13 06:55:30.131834 systemd[1]: Started systemd-networkd.service. Dec 13 06:55:30.132008 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 06:55:30.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.136538 systemd-networkd[1022]: eth0: Link UP Dec 13 06:55:30.136684 systemd-networkd[1022]: eth0: Gained carrier Dec 13 06:55:30.153981 systemd-networkd[1022]: eth0: DHCPv4 address 10.230.34.74/30, gateway 10.230.34.73 acquired from 10.230.34.73 Dec 13 06:55:30.159279 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 06:55:30.175818 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 06:55:30.189834 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 06:55:30.196794 kernel: ACPI: button: Power Button [PWRF] Dec 13 06:55:30.248000 audit[1027]: AVC avc: denied { confidentiality } for pid=1027 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 06:55:30.248000 audit[1027]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560a2d3870a0 a1=337fc a2=7f66d7c52bc5 a3=5 items=110 ppid=1013 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:55:30.248000 audit: CWD cwd="/" Dec 13 06:55:30.248000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=1 name=(null) inode=16391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=2 name=(null) inode=16391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=3 name=(null) inode=16392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=4 name=(null) inode=16391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH 
item=5 name=(null) inode=16393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=6 name=(null) inode=16391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=7 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=8 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=9 name=(null) inode=16395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=10 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=11 name=(null) inode=16396 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=12 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=13 name=(null) inode=16397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=14 name=(null) inode=16394 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=15 name=(null) inode=16398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=16 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=17 name=(null) inode=16399 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=18 name=(null) inode=16391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=19 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=20 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=21 name=(null) inode=16401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=22 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=23 name=(null) inode=16402 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=24 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=25 name=(null) inode=16403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=26 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=27 name=(null) inode=16404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=28 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=29 name=(null) inode=16405 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=30 name=(null) inode=16391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=31 name=(null) inode=16406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=32 name=(null) inode=16406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=33 name=(null) inode=16407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=34 name=(null) inode=16406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=35 name=(null) inode=16408 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=36 name=(null) inode=16406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=37 name=(null) inode=16409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=38 name=(null) inode=16406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=39 name=(null) inode=16410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=40 name=(null) inode=16406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=41 name=(null) inode=16411 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=42 name=(null) inode=16391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=43 name=(null) inode=16412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=44 name=(null) inode=16412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=45 name=(null) inode=16413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=46 name=(null) inode=16412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=47 name=(null) inode=16414 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=48 name=(null) inode=16412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=49 name=(null) inode=16415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=50 name=(null) inode=16412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
06:55:30.248000 audit: PATH item=51 name=(null) inode=16416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=52 name=(null) inode=16412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=53 name=(null) inode=16417 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=55 name=(null) inode=16418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=56 name=(null) inode=16418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=57 name=(null) inode=16419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=58 name=(null) inode=16418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=59 name=(null) inode=16420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=60 
name=(null) inode=16418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=61 name=(null) inode=16421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=62 name=(null) inode=16421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=63 name=(null) inode=16422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=64 name=(null) inode=16421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=65 name=(null) inode=16423 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=66 name=(null) inode=16421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=67 name=(null) inode=16424 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=68 name=(null) inode=16421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=69 name=(null) inode=16425 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=70 name=(null) inode=16421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=71 name=(null) inode=16426 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=72 name=(null) inode=16418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=73 name=(null) inode=16427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=74 name=(null) inode=16427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=75 name=(null) inode=16428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=76 name=(null) inode=16427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=77 name=(null) inode=16431 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=78 name=(null) inode=16427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=79 name=(null) inode=16432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=80 name=(null) inode=16427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=81 name=(null) inode=16433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=82 name=(null) inode=16427 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=83 name=(null) inode=16434 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=84 name=(null) inode=16418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=85 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=86 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=87 name=(null) inode=16436 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=88 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=89 name=(null) inode=16437 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=90 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=91 name=(null) inode=16438 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=92 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=93 name=(null) inode=16439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=94 name=(null) inode=16435 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=95 name=(null) inode=16440 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=96 name=(null) inode=16418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=97 name=(null) inode=16441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=98 name=(null) inode=16441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=99 name=(null) inode=16442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=100 name=(null) inode=16441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=101 name=(null) inode=16443 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=102 name=(null) inode=16441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=103 name=(null) inode=16444 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=104 name=(null) inode=16441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=105 name=(null) inode=16445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
06:55:30.248000 audit: PATH item=106 name=(null) inode=16441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=107 name=(null) inode=16446 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PATH item=109 name=(null) inode=16447 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:55:30.248000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 06:55:30.298805 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 06:55:30.337133 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 06:55:30.337397 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 06:55:30.337621 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 06:55:30.469437 systemd[1]: Finished systemd-udev-settle.service. Dec 13 06:55:30.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.472208 systemd[1]: Starting lvm2-activation-early.service... Dec 13 06:55:30.505849 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:55:30.539938 systemd[1]: Finished lvm2-activation-early.service. 
Dec 13 06:55:30.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.541015 systemd[1]: Reached target cryptsetup.target. Dec 13 06:55:30.543482 systemd[1]: Starting lvm2-activation.service... Dec 13 06:55:30.550545 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:55:30.582358 systemd[1]: Finished lvm2-activation.service. Dec 13 06:55:30.583351 systemd[1]: Reached target local-fs-pre.target. Dec 13 06:55:30.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.584080 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 06:55:30.584128 systemd[1]: Reached target local-fs.target. Dec 13 06:55:30.584888 systemd[1]: Reached target machines.target. Dec 13 06:55:30.587629 systemd[1]: Starting ldconfig.service... Dec 13 06:55:30.588893 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:55:30.588960 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:55:30.591118 systemd[1]: Starting systemd-boot-update.service... Dec 13 06:55:30.595899 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 06:55:30.602849 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 06:55:30.605798 systemd[1]: Starting systemd-sysext.service... 
Dec 13 06:55:30.607148 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl) Dec 13 06:55:30.610059 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 06:55:30.629068 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 06:55:30.669990 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 06:55:30.670275 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 06:55:30.702831 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 06:55:30.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.781795 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 06:55:30.791812 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 06:55:30.794347 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 06:55:30.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.811242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 06:55:30.835007 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 06:55:30.850571 (sd-sysext)[1058]: Using extensions 'kubernetes'. Dec 13 06:55:30.852684 (sd-sysext)[1058]: Merged extensions into '/usr'. Dec 13 06:55:30.894176 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:55:30.901868 systemd[1]: Mounting usr-share-oem.mount... Dec 13 06:55:30.902838 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:55:30.905904 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 06:55:30.909243 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:55:30.909799 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Dec 13 06:55:30.909799 systemd-fsck[1055]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 06:55:30.914541 systemd[1]: Starting modprobe@loop.service... Dec 13 06:55:30.916107 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:55:30.916589 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:55:30.917123 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:55:30.923328 systemd[1]: Mounted usr-share-oem.mount. Dec 13 06:55:30.924438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:55:30.924697 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:55:30.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.928531 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 06:55:30.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.930217 systemd[1]: Finished systemd-sysext.service. 
Dec 13 06:55:30.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.931373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:55:30.931566 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:55:30.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.933292 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:55:30.933549 systemd[1]: Finished modprobe@loop.service. Dec 13 06:55:30.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:30.941472 systemd[1]: Mounting boot.mount... Dec 13 06:55:30.943956 systemd[1]: Starting ensure-sysext.service... Dec 13 06:55:30.945308 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:55:30.945440 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:55:30.947933 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Dec 13 06:55:30.960156 systemd[1]: Reloading. Dec 13 06:55:30.978955 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 06:55:30.985156 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 06:55:30.996548 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 06:55:31.119679 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2024-12-13T06:55:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:55:31.120874 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2024-12-13T06:55:31Z" level=info msg="torcx already run" Dec 13 06:55:31.192468 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 06:55:31.262854 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:55:31.263518 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:55:31.291610 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 06:55:31.376000 audit: BPF prog-id=30 op=LOAD Dec 13 06:55:31.377000 audit: BPF prog-id=26 op=UNLOAD Dec 13 06:55:31.379000 audit: BPF prog-id=31 op=LOAD Dec 13 06:55:31.379000 audit: BPF prog-id=27 op=UNLOAD Dec 13 06:55:31.379000 audit: BPF prog-id=32 op=LOAD Dec 13 06:55:31.380000 audit: BPF prog-id=33 op=LOAD Dec 13 06:55:31.380000 audit: BPF prog-id=28 op=UNLOAD Dec 13 06:55:31.380000 audit: BPF prog-id=29 op=UNLOAD Dec 13 06:55:31.383000 audit: BPF prog-id=34 op=LOAD Dec 13 06:55:31.383000 audit: BPF prog-id=21 op=UNLOAD Dec 13 06:55:31.383000 audit: BPF prog-id=35 op=LOAD Dec 13 06:55:31.383000 audit: BPF prog-id=36 op=LOAD Dec 13 06:55:31.384000 audit: BPF prog-id=22 op=UNLOAD Dec 13 06:55:31.384000 audit: BPF prog-id=23 op=UNLOAD Dec 13 06:55:31.384000 audit: BPF prog-id=37 op=LOAD Dec 13 06:55:31.384000 audit: BPF prog-id=38 op=LOAD Dec 13 06:55:31.384000 audit: BPF prog-id=24 op=UNLOAD Dec 13 06:55:31.384000 audit: BPF prog-id=25 op=UNLOAD Dec 13 06:55:31.399606 systemd[1]: Finished ldconfig.service. Dec 13 06:55:31.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.403022 systemd[1]: Mounted boot.mount. Dec 13 06:55:31.422362 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.424330 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:55:31.439924 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:55:31.444165 systemd[1]: Starting modprobe@loop.service... Dec 13 06:55:31.444938 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.445127 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 06:55:31.446887 systemd[1]: Finished systemd-boot-update.service. Dec 13 06:55:31.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.448170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:55:31.448435 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:55:31.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.450663 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:55:31.450878 systemd[1]: Finished modprobe@loop.service. Dec 13 06:55:31.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.452073 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.454498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:55:31.454686 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 06:55:31.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.456337 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.458197 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:55:31.462071 systemd[1]: Starting modprobe@loop.service... Dec 13 06:55:31.464234 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.464415 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:55:31.464613 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:55:31.465926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:55:31.466163 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:55:31.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.467422 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 06:55:31.467617 systemd[1]: Finished modprobe@loop.service. Dec 13 06:55:31.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.472397 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.474196 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:55:31.477917 systemd[1]: Starting modprobe@drm.service... Dec 13 06:55:31.481420 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:55:31.485225 systemd[1]: Starting modprobe@loop.service... Dec 13 06:55:31.487346 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.487549 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:55:31.489424 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 06:55:31.496332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:55:31.496567 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:55:31.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:55:31.498979 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 06:55:31.499175 systemd[1]: Finished modprobe@drm.service. Dec 13 06:55:31.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.500411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:55:31.500607 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:55:31.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.502755 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:55:31.502954 systemd[1]: Finished modprobe@loop.service. Dec 13 06:55:31.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.506251 systemd[1]: Finished ensure-sysext.service. 
Dec 13 06:55:31.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.508889 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:55:31.508955 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.553728 systemd-networkd[1022]: eth0: Gained IPv6LL Dec 13 06:55:31.560045 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 06:55:31.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.578311 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 06:55:31.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.581323 systemd[1]: Starting audit-rules.service... Dec 13 06:55:31.583868 systemd[1]: Starting clean-ca-certificates.service... Dec 13 06:55:31.587569 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 06:55:31.591000 audit: BPF prog-id=39 op=LOAD Dec 13 06:55:31.594007 systemd[1]: Starting systemd-resolved.service... Dec 13 06:55:31.597000 audit: BPF prog-id=40 op=LOAD Dec 13 06:55:31.599673 systemd[1]: Starting systemd-timesyncd.service... Dec 13 06:55:31.604027 systemd[1]: Starting systemd-update-utmp.service... Dec 13 06:55:31.618070 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 06:55:31.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.618941 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:55:31.622000 audit[1151]: SYSTEM_BOOT pid=1151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.625939 systemd[1]: Finished systemd-update-utmp.service. Dec 13 06:55:31.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.643380 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 06:55:31.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.646367 systemd[1]: Starting systemd-update-done.service... Dec 13 06:55:31.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.656285 systemd[1]: Finished systemd-update-done.service. Dec 13 06:55:31.668234 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:55:31.668285 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 06:55:31.706512 systemd[1]: Started systemd-timesyncd.service. Dec 13 06:55:31.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:55:31.707569 systemd[1]: Reached target time-set.target. Dec 13 06:55:31.708000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 06:55:31.708000 audit[1164]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9be9de80 a2=420 a3=0 items=0 ppid=1143 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:55:31.708000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 06:55:31.709948 augenrules[1164]: No rules Dec 13 06:55:31.710292 systemd[1]: Finished audit-rules.service. Dec 13 06:55:31.725389 systemd-resolved[1147]: Positive Trust Anchors: Dec 13 06:55:31.725844 systemd-resolved[1147]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:55:31.726015 systemd-resolved[1147]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:55:31.734419 systemd-resolved[1147]: Using system hostname 'srv-avpje.gb1.brightbox.com'. Dec 13 06:55:31.737374 systemd[1]: Started systemd-resolved.service. 
Dec 13 06:55:31.738266 systemd[1]: Reached target network.target. Dec 13 06:55:31.738917 systemd[1]: Reached target network-online.target. Dec 13 06:55:31.739586 systemd[1]: Reached target nss-lookup.target. Dec 13 06:55:31.740277 systemd[1]: Reached target sysinit.target. Dec 13 06:55:31.741062 systemd[1]: Started motdgen.path. Dec 13 06:55:31.741717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 06:55:31.742782 systemd[1]: Started logrotate.timer. Dec 13 06:55:31.743527 systemd[1]: Started mdadm.timer. Dec 13 06:55:31.744127 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 06:55:31.744799 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 06:55:31.744844 systemd[1]: Reached target paths.target. Dec 13 06:55:31.745457 systemd[1]: Reached target timers.target. Dec 13 06:55:31.747135 systemd[1]: Listening on dbus.socket. Dec 13 06:55:31.749560 systemd[1]: Starting docker.socket... Dec 13 06:55:31.754273 systemd[1]: Listening on sshd.socket. Dec 13 06:55:31.755065 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:55:31.755697 systemd[1]: Listening on docker.socket. Dec 13 06:55:31.756464 systemd[1]: Reached target sockets.target. Dec 13 06:55:31.757128 systemd[1]: Reached target basic.target. Dec 13 06:55:31.757870 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.757927 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:55:31.759642 systemd[1]: Starting containerd.service... Dec 13 06:55:31.761853 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 06:55:31.764872 systemd[1]: Starting dbus.service... 
Dec 13 06:55:31.767690 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 06:55:31.771957 systemd[1]: Starting extend-filesystems.service... Dec 13 06:55:31.773573 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 06:55:31.780470 systemd[1]: Starting kubelet.service... Dec 13 06:55:31.783944 jq[1177]: false Dec 13 06:55:31.787137 systemd[1]: Starting motdgen.service... Dec 13 06:55:31.791675 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 06:55:31.797044 systemd[1]: Starting sshd-keygen.service... Dec 13 06:55:31.805239 systemd[1]: Starting systemd-logind.service... Dec 13 06:55:31.806157 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:55:31.806338 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 06:55:31.807198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 06:55:31.808576 systemd[1]: Starting update-engine.service... Dec 13 06:55:31.814344 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 06:55:31.821382 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 06:55:31.822491 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 06:55:31.839699 jq[1191]: true Dec 13 06:55:31.858238 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 06:55:31.858520 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 06:55:31.871018 dbus-daemon[1174]: [system] SELinux support is enabled Dec 13 06:55:31.871361 systemd[1]: Started dbus.service. 
Dec 13 06:55:31.875189 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 06:55:31.875252 systemd[1]: Reached target system-config.target. Dec 13 06:55:31.876027 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 06:55:31.876082 systemd[1]: Reached target user-config.target. Dec 13 06:55:31.883193 extend-filesystems[1178]: Found loop1 Dec 13 06:55:31.887962 dbus-daemon[1174]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1022 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 06:55:31.888840 jq[1200]: true Dec 13 06:55:31.892705 dbus-daemon[1174]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda1 Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda2 Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda3 Dec 13 06:55:31.901226 extend-filesystems[1178]: Found usr Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda4 Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda6 Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda7 Dec 13 06:55:31.901226 extend-filesystems[1178]: Found vda9 Dec 13 06:55:31.901226 extend-filesystems[1178]: Checking size of /dev/vda9 Dec 13 06:55:31.903101 systemd[1]: Starting systemd-hostnamed.service... Dec 13 06:55:31.920496 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 06:55:31.920796 systemd[1]: Finished motdgen.service. 
Dec 13 06:55:31.962867 extend-filesystems[1178]: Resized partition /dev/vda9 Dec 13 06:55:31.976335 extend-filesystems[1221]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 06:55:32.957968 systemd-timesyncd[1150]: Contacted time server 185.177.149.33:123 (0.flatcar.pool.ntp.org). Dec 13 06:55:32.958046 systemd-timesyncd[1150]: Initial clock synchronization to Fri 2024-12-13 06:55:32.957739 UTC. Dec 13 06:55:32.958821 systemd-resolved[1147]: Clock change detected. Flushing caches. Dec 13 06:55:32.961714 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 06:55:32.963051 update_engine[1189]: I1213 06:55:32.959383 1189 main.cc:92] Flatcar Update Engine starting Dec 13 06:55:32.966552 systemd[1]: Started update-engine.service. Dec 13 06:55:32.966923 update_engine[1189]: I1213 06:55:32.966562 1189 update_check_scheduler.cc:74] Next update check in 7m37s Dec 13 06:55:32.970430 systemd[1]: Started locksmithd.service. Dec 13 06:55:33.034908 bash[1228]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:55:33.035464 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 06:55:33.050712 env[1197]: time="2024-12-13T06:55:33.048615728Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 06:55:33.060746 systemd-logind[1187]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 06:55:33.060802 systemd-logind[1187]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 06:55:33.061105 systemd-logind[1187]: New seat seat0. Dec 13 06:55:33.063526 systemd[1]: Started systemd-logind.service. Dec 13 06:55:33.158599 dbus-daemon[1174]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 06:55:33.158822 systemd[1]: Started systemd-hostnamed.service. 
Dec 13 06:55:33.168866 dbus-daemon[1174]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1206 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 06:55:33.175236 systemd[1]: Starting polkit.service... Dec 13 06:55:33.198990 env[1197]: time="2024-12-13T06:55:33.198101086Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 06:55:33.198990 env[1197]: time="2024-12-13T06:55:33.198473117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:55:33.206078 env[1197]: time="2024-12-13T06:55:33.205798681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:55:33.206078 env[1197]: time="2024-12-13T06:55:33.205850838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:55:33.206238 env[1197]: time="2024-12-13T06:55:33.206200707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:55:33.206335 env[1197]: time="2024-12-13T06:55:33.206236475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 06:55:33.206335 env[1197]: time="2024-12-13T06:55:33.206265359Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 06:55:33.206335 env[1197]: time="2024-12-13T06:55:33.206288535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 06:55:33.206483 env[1197]: time="2024-12-13T06:55:33.206441523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:55:33.209041 env[1197]: time="2024-12-13T06:55:33.208350467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:55:33.209963 env[1197]: time="2024-12-13T06:55:33.209923726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:55:33.209963 env[1197]: time="2024-12-13T06:55:33.209960061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 06:55:33.210084 env[1197]: time="2024-12-13T06:55:33.210049727Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 06:55:33.210187 env[1197]: time="2024-12-13T06:55:33.210084412Z" level=info msg="metadata content store policy set" policy=shared Dec 13 06:55:33.218733 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 06:55:33.225048 polkitd[1233]: Started polkitd version 121 Dec 13 06:55:33.243990 extend-filesystems[1221]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 06:55:33.243990 extend-filesystems[1221]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 06:55:33.243990 extend-filesystems[1221]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 06:55:33.279429 extend-filesystems[1178]: Resized filesystem in /dev/vda9 Dec 13 06:55:33.269919 polkitd[1233]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246002931Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246089827Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246144108Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246251934Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246304160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246355947Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246388631Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246462049Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246495947Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246547856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246599264Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246632224Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.246959780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 06:55:33.280560 env[1197]: time="2024-12-13T06:55:33.247213154Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 06:55:33.251983 systemd[1]: Started containerd.service. Dec 13 06:55:33.270015 polkitd[1233]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.247874502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.247962714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248018761Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248184823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248216083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248265017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248293796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248345490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248381662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248437544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248461110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248490200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248831705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248865194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.281955 env[1197]: time="2024-12-13T06:55:33.248888973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.274188 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 06:55:33.276014 polkitd[1233]: Finished loading, compiling and executing 2 rules Dec 13 06:55:33.282814 env[1197]: time="2024-12-13T06:55:33.248924419Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 06:55:33.282814 env[1197]: time="2024-12-13T06:55:33.248959856Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 06:55:33.282814 env[1197]: time="2024-12-13T06:55:33.248979693Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 06:55:33.282814 env[1197]: time="2024-12-13T06:55:33.249068033Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 06:55:33.282814 env[1197]: time="2024-12-13T06:55:33.249185253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 06:55:33.274457 systemd[1]: Finished extend-filesystems.service. Dec 13 06:55:33.277149 dbus-daemon[1174]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 06:55:33.277612 systemd[1]: Started polkit.service. 
Dec 13 06:55:33.278736 polkitd[1233]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.249658465Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.249789542Z" level=info msg="Connect containerd service" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.249907506Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.251045313Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.251652512Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.251745235Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.252041167Z" level=info msg="containerd successfully booted in 0.226341s" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.259757422Z" level=info msg="Start subscribing containerd event" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.259993953Z" level=info msg="Start recovering state" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.260150746Z" level=info msg="Start event monitor" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.260197288Z" level=info msg="Start snapshots syncer" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.260230687Z" level=info msg="Start cni network conf syncer for default" Dec 13 06:55:33.283425 env[1197]: time="2024-12-13T06:55:33.260247219Z" level=info msg="Start streaming server" Dec 13 06:55:33.312924 systemd-hostnamed[1206]: Hostname set to (static) Dec 13 06:55:33.321200 systemd-networkd[1022]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8892:24:19ff:fee6:224a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8892:24:19ff:fee6:224a/64 assigned by NDisc. Dec 13 06:55:33.321213 systemd-networkd[1022]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 06:55:33.472584 locksmithd[1225]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 06:55:34.067994 systemd[1]: Created slice system-sshd.slice. Dec 13 06:55:34.122792 systemd[1]: Started kubelet.service. 
Dec 13 06:55:34.802346 kubelet[1250]: E1213 06:55:34.802274 1250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:55:34.804099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:55:34.804346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:55:34.804809 systemd[1]: kubelet.service: Consumed 1.059s CPU time. Dec 13 06:55:34.968572 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 06:55:34.995970 systemd[1]: Finished sshd-keygen.service. Dec 13 06:55:34.999188 systemd[1]: Starting issuegen.service... Dec 13 06:55:35.001676 systemd[1]: Started sshd@0-10.230.34.74:22-139.178.89.65:53918.service. Dec 13 06:55:35.008744 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 06:55:35.009000 systemd[1]: Finished issuegen.service. Dec 13 06:55:35.011919 systemd[1]: Starting systemd-user-sessions.service... Dec 13 06:55:35.024041 systemd[1]: Finished systemd-user-sessions.service. Dec 13 06:55:35.027244 systemd[1]: Started getty@tty1.service. Dec 13 06:55:35.030425 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 06:55:35.031924 systemd[1]: Reached target getty.target. Dec 13 06:55:35.927633 sshd[1264]: Accepted publickey for core from 139.178.89.65 port 53918 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:55:35.930731 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:35.946953 systemd[1]: Created slice user-500.slice. Dec 13 06:55:35.950125 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 06:55:35.955838 systemd-logind[1187]: New session 1 of user core. 
Dec 13 06:55:35.967300 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 06:55:35.971732 systemd[1]: Starting user@500.service... Dec 13 06:55:35.977287 (systemd)[1273]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:36.083142 systemd[1273]: Queued start job for default target default.target. Dec 13 06:55:36.085048 systemd[1273]: Reached target paths.target. Dec 13 06:55:36.085380 systemd[1273]: Reached target sockets.target. Dec 13 06:55:36.085731 systemd[1273]: Reached target timers.target. Dec 13 06:55:36.085890 systemd[1273]: Reached target basic.target. Dec 13 06:55:36.086134 systemd[1273]: Reached target default.target. Dec 13 06:55:36.086243 systemd[1]: Started user@500.service. Dec 13 06:55:36.086435 systemd[1273]: Startup finished in 99ms. Dec 13 06:55:36.091092 systemd[1]: Started session-1.scope. Dec 13 06:55:36.722122 systemd[1]: Started sshd@1-10.230.34.74:22-139.178.89.65:53932.service. Dec 13 06:55:37.614663 sshd[1282]: Accepted publickey for core from 139.178.89.65 port 53932 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:55:37.616803 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:37.624942 systemd[1]: Started session-2.scope. Dec 13 06:55:37.626167 systemd-logind[1187]: New session 2 of user core. Dec 13 06:55:38.234558 sshd[1282]: pam_unix(sshd:session): session closed for user core Dec 13 06:55:38.238668 systemd[1]: sshd@1-10.230.34.74:22-139.178.89.65:53932.service: Deactivated successfully. Dec 13 06:55:38.239719 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 06:55:38.240566 systemd-logind[1187]: Session 2 logged out. Waiting for processes to exit. Dec 13 06:55:38.241772 systemd-logind[1187]: Removed session 2. Dec 13 06:55:38.381657 systemd[1]: Started sshd@2-10.230.34.74:22-139.178.89.65:40748.service. 
Dec 13 06:55:39.271721 sshd[1288]: Accepted publickey for core from 139.178.89.65 port 40748 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:55:39.273847 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:39.281519 systemd-logind[1187]: New session 3 of user core. Dec 13 06:55:39.281874 systemd[1]: Started session-3.scope. Dec 13 06:55:39.888412 sshd[1288]: pam_unix(sshd:session): session closed for user core Dec 13 06:55:39.892889 systemd-logind[1187]: Session 3 logged out. Waiting for processes to exit. Dec 13 06:55:39.895298 systemd[1]: sshd@2-10.230.34.74:22-139.178.89.65:40748.service: Deactivated successfully. Dec 13 06:55:39.896247 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 06:55:39.898337 systemd-logind[1187]: Removed session 3. Dec 13 06:55:39.902378 coreos-metadata[1173]: Dec 13 06:55:39.902 WARN failed to locate config-drive, using the metadata service API instead Dec 13 06:55:39.956527 coreos-metadata[1173]: Dec 13 06:55:39.956 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 06:55:39.981728 coreos-metadata[1173]: Dec 13 06:55:39.981 INFO Fetch successful Dec 13 06:55:39.982167 coreos-metadata[1173]: Dec 13 06:55:39.981 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 06:55:40.017454 coreos-metadata[1173]: Dec 13 06:55:40.017 INFO Fetch successful Dec 13 06:55:40.019765 unknown[1173]: wrote ssh authorized keys file for user: core Dec 13 06:55:40.056395 update-ssh-keys[1295]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:55:40.058196 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 06:55:40.058977 systemd[1]: Reached target multi-user.target. Dec 13 06:55:40.061807 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 06:55:40.076944 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Dec 13 06:55:40.077240 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 06:55:40.080841 systemd[1]: Startup finished in 1.219s (kernel) + 6.176s (initrd) + 14.210s (userspace) = 21.606s. Dec 13 06:55:44.807050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 06:55:44.807391 systemd[1]: Stopped kubelet.service. Dec 13 06:55:44.807469 systemd[1]: kubelet.service: Consumed 1.059s CPU time. Dec 13 06:55:44.810025 systemd[1]: Starting kubelet.service... Dec 13 06:55:44.964536 systemd[1]: Started kubelet.service. Dec 13 06:55:45.054519 kubelet[1301]: E1213 06:55:45.054446 1301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:55:45.059038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:55:45.059313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:55:50.042132 systemd[1]: Started sshd@3-10.230.34.74:22-139.178.89.65:53712.service. Dec 13 06:55:50.948080 sshd[1307]: Accepted publickey for core from 139.178.89.65 port 53712 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:55:50.950424 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:50.959274 systemd-logind[1187]: New session 4 of user core. Dec 13 06:55:50.960630 systemd[1]: Started session-4.scope. Dec 13 06:55:51.573662 sshd[1307]: pam_unix(sshd:session): session closed for user core Dec 13 06:55:51.577722 systemd-logind[1187]: Session 4 logged out. Waiting for processes to exit. Dec 13 06:55:51.578123 systemd[1]: sshd@3-10.230.34.74:22-139.178.89.65:53712.service: Deactivated successfully. 
Dec 13 06:55:51.579154 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 06:55:51.580178 systemd-logind[1187]: Removed session 4. Dec 13 06:55:51.721833 systemd[1]: Started sshd@4-10.230.34.74:22-139.178.89.65:53720.service. Dec 13 06:55:52.622595 sshd[1313]: Accepted publickey for core from 139.178.89.65 port 53720 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:55:52.625424 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:52.633409 systemd-logind[1187]: New session 5 of user core. Dec 13 06:55:52.634187 systemd[1]: Started session-5.scope. Dec 13 06:55:53.242237 sshd[1313]: pam_unix(sshd:session): session closed for user core Dec 13 06:55:53.246308 systemd[1]: sshd@4-10.230.34.74:22-139.178.89.65:53720.service: Deactivated successfully. Dec 13 06:55:53.247481 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 06:55:53.248408 systemd-logind[1187]: Session 5 logged out. Waiting for processes to exit. Dec 13 06:55:53.250554 systemd-logind[1187]: Removed session 5. Dec 13 06:55:53.391902 systemd[1]: Started sshd@5-10.230.34.74:22-139.178.89.65:53728.service. Dec 13 06:55:54.290023 sshd[1319]: Accepted publickey for core from 139.178.89.65 port 53728 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:55:54.293017 sshd[1319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:54.301290 systemd[1]: Started session-6.scope. Dec 13 06:55:54.301774 systemd-logind[1187]: New session 6 of user core. Dec 13 06:55:54.912973 sshd[1319]: pam_unix(sshd:session): session closed for user core Dec 13 06:55:54.917365 systemd[1]: sshd@5-10.230.34.74:22-139.178.89.65:53728.service: Deactivated successfully. Dec 13 06:55:54.918511 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 06:55:54.919390 systemd-logind[1187]: Session 6 logged out. Waiting for processes to exit. 
Dec 13 06:55:54.920750 systemd-logind[1187]: Removed session 6. Dec 13 06:55:55.060038 systemd[1]: Started sshd@6-10.230.34.74:22-139.178.89.65:53740.service. Dec 13 06:55:55.061121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 06:55:55.061340 systemd[1]: Stopped kubelet.service. Dec 13 06:55:55.063981 systemd[1]: Starting kubelet.service... Dec 13 06:55:55.201391 systemd[1]: Started kubelet.service. Dec 13 06:55:55.294784 kubelet[1331]: E1213 06:55:55.294664 1331 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:55:55.297117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:55:55.297400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:55:55.955613 sshd[1325]: Accepted publickey for core from 139.178.89.65 port 53740 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:55:55.957774 sshd[1325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:55:55.965258 systemd-logind[1187]: New session 7 of user core. Dec 13 06:55:55.966364 systemd[1]: Started session-7.scope. Dec 13 06:55:56.444997 sudo[1337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 06:55:56.446103 sudo[1337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 06:55:56.469005 systemd[1]: Starting coreos-metadata.service... Dec 13 06:56:03.345331 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 06:56:03.522167 coreos-metadata[1341]: Dec 13 06:56:03.522 WARN failed to locate config-drive, using the metadata service API instead Dec 13 06:56:03.576000 coreos-metadata[1341]: Dec 13 06:56:03.575 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 06:56:03.577383 coreos-metadata[1341]: Dec 13 06:56:03.577 INFO Fetch successful Dec 13 06:56:03.577676 coreos-metadata[1341]: Dec 13 06:56:03.577 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 06:56:03.590181 coreos-metadata[1341]: Dec 13 06:56:03.589 INFO Fetch successful Dec 13 06:56:03.590475 coreos-metadata[1341]: Dec 13 06:56:03.590 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 06:56:03.603426 coreos-metadata[1341]: Dec 13 06:56:03.603 INFO Fetch successful Dec 13 06:56:03.603972 coreos-metadata[1341]: Dec 13 06:56:03.603 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 06:56:03.618738 coreos-metadata[1341]: Dec 13 06:56:03.618 INFO Fetch successful Dec 13 06:56:03.619021 coreos-metadata[1341]: Dec 13 06:56:03.618 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 06:56:03.636532 coreos-metadata[1341]: Dec 13 06:56:03.636 INFO Fetch successful Dec 13 06:56:03.648230 systemd[1]: Finished coreos-metadata.service. Dec 13 06:56:04.407422 systemd[1]: Stopped kubelet.service. Dec 13 06:56:04.411860 systemd[1]: Starting kubelet.service... Dec 13 06:56:04.449958 systemd[1]: Reloading. 
Dec 13 06:56:04.605065 /usr/lib/systemd/system-generators/torcx-generator[1402]: time="2024-12-13T06:56:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:56:04.606926 /usr/lib/systemd/system-generators/torcx-generator[1402]: time="2024-12-13T06:56:04Z" level=info msg="torcx already run" Dec 13 06:56:04.718206 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:56:04.718633 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:56:04.748028 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:56:04.893746 systemd[1]: Started kubelet.service. Dec 13 06:56:04.896352 systemd[1]: Stopping kubelet.service... Dec 13 06:56:04.897081 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 06:56:04.897482 systemd[1]: Stopped kubelet.service. Dec 13 06:56:04.900428 systemd[1]: Starting kubelet.service... Dec 13 06:56:05.035315 systemd[1]: Started kubelet.service. Dec 13 06:56:05.129769 kubelet[1453]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:56:05.129769 kubelet[1453]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 06:56:05.129769 kubelet[1453]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:56:05.131283 kubelet[1453]: I1213 06:56:05.131225 1453 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 06:56:06.419721 kubelet[1453]: I1213 06:56:06.419649 1453 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 06:56:06.420590 kubelet[1453]: I1213 06:56:06.420550 1453 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 06:56:06.421097 kubelet[1453]: I1213 06:56:06.421070 1453 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 06:56:06.476198 kubelet[1453]: I1213 06:56:06.476142 1453 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:56:06.485879 kubelet[1453]: E1213 06:56:06.485828 1453 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 06:56:06.486007 kubelet[1453]: I1213 06:56:06.485905 1453 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 06:56:06.493406 kubelet[1453]: I1213 06:56:06.493376 1453 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 06:56:06.495112 kubelet[1453]: I1213 06:56:06.495040 1453 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 06:56:06.495375 kubelet[1453]: I1213 06:56:06.495319 1453 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 06:56:06.495666 kubelet[1453]: I1213 06:56:06.495372 1453 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.34.74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPoli
cyOptions":null,"CgroupVersion":2} Dec 13 06:56:06.495666 kubelet[1453]: I1213 06:56:06.495654 1453 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 06:56:06.495666 kubelet[1453]: I1213 06:56:06.495672 1453 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 06:56:06.496075 kubelet[1453]: I1213 06:56:06.495828 1453 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:56:06.498380 kubelet[1453]: I1213 06:56:06.498337 1453 kubelet.go:408] "Attempting to sync node with API server" Dec 13 06:56:06.498380 kubelet[1453]: I1213 06:56:06.498373 1453 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 06:56:06.498526 kubelet[1453]: I1213 06:56:06.498432 1453 kubelet.go:314] "Adding apiserver pod source" Dec 13 06:56:06.498526 kubelet[1453]: I1213 06:56:06.498462 1453 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 06:56:06.516739 kubelet[1453]: E1213 06:56:06.516674 1453 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:06.517770 kubelet[1453]: E1213 06:56:06.517742 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:06.518052 kubelet[1453]: I1213 06:56:06.517761 1453 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 06:56:06.520848 kubelet[1453]: I1213 06:56:06.520816 1453 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 06:56:06.521996 kubelet[1453]: W1213 06:56:06.521965 1453 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 06:56:06.523137 kubelet[1453]: I1213 06:56:06.523113 1453 server.go:1269] "Started kubelet" Dec 13 06:56:06.532951 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 06:56:06.534960 kubelet[1453]: I1213 06:56:06.533669 1453 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 06:56:06.537908 kubelet[1453]: I1213 06:56:06.537833 1453 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 06:56:06.541708 kubelet[1453]: I1213 06:56:06.541659 1453 server.go:460] "Adding debug handlers to kubelet server" Dec 13 06:56:06.552329 kubelet[1453]: I1213 06:56:06.550484 1453 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 06:56:06.552329 kubelet[1453]: I1213 06:56:06.551535 1453 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 06:56:06.552329 kubelet[1453]: I1213 06:56:06.551997 1453 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 06:56:06.556549 kubelet[1453]: I1213 06:56:06.555576 1453 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 06:56:06.556549 kubelet[1453]: E1213 06:56:06.555791 1453 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.34.74\" not found" Dec 13 06:56:06.562422 kubelet[1453]: I1213 06:56:06.559278 1453 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 06:56:06.562422 kubelet[1453]: I1213 06:56:06.559465 1453 reconciler.go:26] "Reconciler: start to sync state" Dec 13 06:56:06.562422 kubelet[1453]: I1213 06:56:06.561112 1453 factory.go:221] Registration of the systemd container factory successfully Dec 13 06:56:06.562422 kubelet[1453]: I1213 06:56:06.561351 1453 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 06:56:06.570510 kubelet[1453]: I1213 06:56:06.567012 1453 factory.go:221] Registration of the containerd container factory successfully Dec 13 06:56:06.581612 kubelet[1453]: E1213 06:56:06.581532 1453 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.34.74\" not found" node="10.230.34.74" Dec 13 06:56:06.594834 kubelet[1453]: I1213 06:56:06.594781 1453 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 06:56:06.594834 kubelet[1453]: I1213 06:56:06.594827 1453 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 06:56:06.595027 kubelet[1453]: I1213 06:56:06.594854 1453 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:56:06.597213 kubelet[1453]: I1213 06:56:06.597187 1453 policy_none.go:49] "None policy: Start" Dec 13 06:56:06.598127 kubelet[1453]: I1213 06:56:06.598096 1453 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 06:56:06.598127 kubelet[1453]: I1213 06:56:06.598130 1453 state_mem.go:35] "Initializing new in-memory state store" Dec 13 06:56:06.609289 systemd[1]: Created slice kubepods.slice. Dec 13 06:56:06.619419 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 06:56:06.628797 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 06:56:06.639830 kubelet[1453]: I1213 06:56:06.639801 1453 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 06:56:06.640369 kubelet[1453]: I1213 06:56:06.640325 1453 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 06:56:06.640614 kubelet[1453]: I1213 06:56:06.640504 1453 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 06:56:06.645609 kubelet[1453]: E1213 06:56:06.642260 1453 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.34.74\" not found" Dec 13 06:56:06.646328 kubelet[1453]: I1213 06:56:06.646309 1453 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 06:56:06.736419 kubelet[1453]: I1213 06:56:06.736158 1453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 06:56:06.740826 kubelet[1453]: I1213 06:56:06.740770 1453 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 06:56:06.741092 kubelet[1453]: I1213 06:56:06.741068 1453 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 06:56:06.741165 kubelet[1453]: I1213 06:56:06.741121 1453 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 06:56:06.741268 kubelet[1453]: E1213 06:56:06.741235 1453 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 06:56:06.751451 kubelet[1453]: I1213 06:56:06.751406 1453 kubelet_node_status.go:72] "Attempting to register node" node="10.230.34.74" Dec 13 06:56:06.758173 kubelet[1453]: I1213 06:56:06.758140 1453 kubelet_node_status.go:75] "Successfully registered node" node="10.230.34.74" Dec 13 06:56:06.775001 kubelet[1453]: I1213 06:56:06.774937 1453 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 06:56:06.775612 env[1197]: time="2024-12-13T06:56:06.775477313Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 06:56:06.776617 kubelet[1453]: I1213 06:56:06.776593 1453 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 06:56:07.085076 sudo[1337]: pam_unix(sudo:session): session closed for user root Dec 13 06:56:07.232138 sshd[1325]: pam_unix(sshd:session): session closed for user core Dec 13 06:56:07.236270 systemd[1]: sshd@6-10.230.34.74:22-139.178.89.65:53740.service: Deactivated successfully. Dec 13 06:56:07.237612 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 06:56:07.238605 systemd-logind[1187]: Session 7 logged out. Waiting for processes to exit. Dec 13 06:56:07.240072 systemd-logind[1187]: Removed session 7. 
Dec 13 06:56:07.424541 kubelet[1453]: I1213 06:56:07.424032 1453 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 06:56:07.425330 kubelet[1453]: W1213 06:56:07.424597 1453 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 06:56:07.425330 kubelet[1453]: W1213 06:56:07.424676 1453 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 06:56:07.425330 kubelet[1453]: W1213 06:56:07.424742 1453 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 06:56:07.510795 kubelet[1453]: I1213 06:56:07.510742 1453 apiserver.go:52] "Watching apiserver" Dec 13 06:56:07.518065 kubelet[1453]: E1213 06:56:07.518036 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:07.524181 systemd[1]: Created slice kubepods-besteffort-podf81aa173_8682_42ea_b665_cce6aadf7121.slice. Dec 13 06:56:07.543113 systemd[1]: Created slice kubepods-burstable-poddd1262ca_a278_4bba_959b_8d0d83228369.slice. 
Dec 13 06:56:07.560224 kubelet[1453]: I1213 06:56:07.560187 1453 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 06:56:07.566080 kubelet[1453]: I1213 06:56:07.566024 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-config-path\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566186 kubelet[1453]: I1213 06:56:07.566091 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6l7z\" (UniqueName: \"kubernetes.io/projected/f81aa173-8682-42ea-b665-cce6aadf7121-kube-api-access-v6l7z\") pod \"kube-proxy-nl47l\" (UID: \"f81aa173-8682-42ea-b665-cce6aadf7121\") " pod="kube-system/kube-proxy-nl47l" Dec 13 06:56:07.566186 kubelet[1453]: I1213 06:56:07.566131 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-bpf-maps\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566186 kubelet[1453]: I1213 06:56:07.566159 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-hostproc\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566186 kubelet[1453]: I1213 06:56:07.566184 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cni-path\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " 
pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566399 kubelet[1453]: I1213 06:56:07.566209 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-etc-cni-netd\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566399 kubelet[1453]: I1213 06:56:07.566234 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-xtables-lock\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566399 kubelet[1453]: I1213 06:56:07.566258 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81aa173-8682-42ea-b665-cce6aadf7121-xtables-lock\") pod \"kube-proxy-nl47l\" (UID: \"f81aa173-8682-42ea-b665-cce6aadf7121\") " pod="kube-system/kube-proxy-nl47l" Dec 13 06:56:07.566399 kubelet[1453]: I1213 06:56:07.566282 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81aa173-8682-42ea-b665-cce6aadf7121-lib-modules\") pod \"kube-proxy-nl47l\" (UID: \"f81aa173-8682-42ea-b665-cce6aadf7121\") " pod="kube-system/kube-proxy-nl47l" Dec 13 06:56:07.566399 kubelet[1453]: I1213 06:56:07.566308 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-cgroup\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566399 kubelet[1453]: I1213 06:56:07.566345 1453 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-hubble-tls\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566778 kubelet[1453]: I1213 06:56:07.566376 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f81aa173-8682-42ea-b665-cce6aadf7121-kube-proxy\") pod \"kube-proxy-nl47l\" (UID: \"f81aa173-8682-42ea-b665-cce6aadf7121\") " pod="kube-system/kube-proxy-nl47l" Dec 13 06:56:07.566778 kubelet[1453]: I1213 06:56:07.566403 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-lib-modules\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566778 kubelet[1453]: I1213 06:56:07.566432 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd1262ca-a278-4bba-959b-8d0d83228369-clustermesh-secrets\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566778 kubelet[1453]: I1213 06:56:07.566459 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-run\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566778 kubelet[1453]: I1213 06:56:07.566485 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-net\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.566778 kubelet[1453]: I1213 06:56:07.566513 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-kernel\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.567058 kubelet[1453]: I1213 06:56:07.566544 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzh9\" (UniqueName: \"kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-kube-api-access-xfzh9\") pod \"cilium-78kmr\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") " pod="kube-system/cilium-78kmr" Dec 13 06:56:07.669533 kubelet[1453]: I1213 06:56:07.669438 1453 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 06:56:07.841258 env[1197]: time="2024-12-13T06:56:07.841181098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nl47l,Uid:f81aa173-8682-42ea-b665-cce6aadf7121,Namespace:kube-system,Attempt:0,}" Dec 13 06:56:07.865130 env[1197]: time="2024-12-13T06:56:07.864458550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-78kmr,Uid:dd1262ca-a278-4bba-959b-8d0d83228369,Namespace:kube-system,Attempt:0,}" Dec 13 06:56:08.518645 kubelet[1453]: E1213 06:56:08.518543 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:08.809684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4090997719.mount: Deactivated successfully. 
Dec 13 06:56:08.817247 env[1197]: time="2024-12-13T06:56:08.817108466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.819166 env[1197]: time="2024-12-13T06:56:08.819084248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.822757 env[1197]: time="2024-12-13T06:56:08.822705831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.825995 env[1197]: time="2024-12-13T06:56:08.825939437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.839727 env[1197]: time="2024-12-13T06:56:08.839637719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.842885 env[1197]: time="2024-12-13T06:56:08.842836442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.845120 env[1197]: time="2024-12-13T06:56:08.845067519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.846826 env[1197]: time="2024-12-13T06:56:08.846784158Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:08.883723 env[1197]: time="2024-12-13T06:56:08.883551801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:56:08.883869 env[1197]: time="2024-12-13T06:56:08.883743121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:56:08.883869 env[1197]: time="2024-12-13T06:56:08.883813123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:56:08.884184 env[1197]: time="2024-12-13T06:56:08.884118536Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59 pid=1514 runtime=io.containerd.runc.v2 Dec 13 06:56:08.898912 env[1197]: time="2024-12-13T06:56:08.898781969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:56:08.899182 env[1197]: time="2024-12-13T06:56:08.898952363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:56:08.899182 env[1197]: time="2024-12-13T06:56:08.899034357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:56:08.899491 env[1197]: time="2024-12-13T06:56:08.899435365Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c917c7566ed15b18c84b7276a4e39c2da4329673d416e0a964fb4aec11ee853 pid=1517 runtime=io.containerd.runc.v2 Dec 13 06:56:08.922218 systemd[1]: Started cri-containerd-44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59.scope. Dec 13 06:56:08.951016 systemd[1]: Started cri-containerd-5c917c7566ed15b18c84b7276a4e39c2da4329673d416e0a964fb4aec11ee853.scope. Dec 13 06:56:08.996079 env[1197]: time="2024-12-13T06:56:08.995998191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-78kmr,Uid:dd1262ca-a278-4bba-959b-8d0d83228369,Namespace:kube-system,Attempt:0,} returns sandbox id \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\"" Dec 13 06:56:08.999649 env[1197]: time="2024-12-13T06:56:08.999608902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 06:56:09.010932 env[1197]: time="2024-12-13T06:56:09.010867600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nl47l,Uid:f81aa173-8682-42ea-b665-cce6aadf7121,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c917c7566ed15b18c84b7276a4e39c2da4329673d416e0a964fb4aec11ee853\"" Dec 13 06:56:09.518955 kubelet[1453]: E1213 06:56:09.518878 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:10.520147 kubelet[1453]: E1213 06:56:10.520077 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:11.521083 kubelet[1453]: E1213 06:56:11.520982 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:12.521925 
kubelet[1453]: E1213 06:56:12.521843 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:13.522205 kubelet[1453]: E1213 06:56:13.522092 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:14.523047 kubelet[1453]: E1213 06:56:14.522980 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:15.524136 kubelet[1453]: E1213 06:56:15.524056 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:16.525092 kubelet[1453]: E1213 06:56:16.525009 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:17.315547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184377092.mount: Deactivated successfully. Dec 13 06:56:17.526337 kubelet[1453]: E1213 06:56:17.526205 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:18.485021 update_engine[1189]: I1213 06:56:18.484019 1189 update_attempter.cc:509] Updating boot flags... 
Dec 13 06:56:18.526935 kubelet[1453]: E1213 06:56:18.526852 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:19.527265 kubelet[1453]: E1213 06:56:19.527185 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:20.528158 kubelet[1453]: E1213 06:56:20.528080 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:21.528539 kubelet[1453]: E1213 06:56:21.528455 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:21.972011 env[1197]: time="2024-12-13T06:56:21.971918472Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:21.975000 env[1197]: time="2024-12-13T06:56:21.974961730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:21.977988 env[1197]: time="2024-12-13T06:56:21.977285040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:21.978520 env[1197]: time="2024-12-13T06:56:21.978477918Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 06:56:21.983193 env[1197]: time="2024-12-13T06:56:21.983146068Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 06:56:21.983544 env[1197]: time="2024-12-13T06:56:21.983503872Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:56:22.002518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089507729.mount: Deactivated successfully. Dec 13 06:56:22.011673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194075286.mount: Deactivated successfully. Dec 13 06:56:22.015263 env[1197]: time="2024-12-13T06:56:22.015216023Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5\"" Dec 13 06:56:22.016394 env[1197]: time="2024-12-13T06:56:22.016357102Z" level=info msg="StartContainer for \"a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5\"" Dec 13 06:56:22.046930 systemd[1]: Started cri-containerd-a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5.scope. Dec 13 06:56:22.096128 env[1197]: time="2024-12-13T06:56:22.096065906Z" level=info msg="StartContainer for \"a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5\" returns successfully" Dec 13 06:56:22.107031 systemd[1]: cri-containerd-a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5.scope: Deactivated successfully. 
Dec 13 06:56:22.389152 env[1197]: time="2024-12-13T06:56:22.389089047Z" level=info msg="shim disconnected" id=a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5 Dec 13 06:56:22.389441 env[1197]: time="2024-12-13T06:56:22.389192324Z" level=warning msg="cleaning up after shim disconnected" id=a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5 namespace=k8s.io Dec 13 06:56:22.389441 env[1197]: time="2024-12-13T06:56:22.389229409Z" level=info msg="cleaning up dead shim" Dec 13 06:56:22.402232 env[1197]: time="2024-12-13T06:56:22.402169772Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:56:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1649 runtime=io.containerd.runc.v2\n" Dec 13 06:56:22.528914 kubelet[1453]: E1213 06:56:22.528803 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:22.778757 env[1197]: time="2024-12-13T06:56:22.778577603Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 06:56:22.794873 env[1197]: time="2024-12-13T06:56:22.794821506Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545\"" Dec 13 06:56:22.795961 env[1197]: time="2024-12-13T06:56:22.795909054Z" level=info msg="StartContainer for \"417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545\"" Dec 13 06:56:22.823921 systemd[1]: Started cri-containerd-417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545.scope. 
Dec 13 06:56:22.908972 env[1197]: time="2024-12-13T06:56:22.908907129Z" level=info msg="StartContainer for \"417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545\" returns successfully" Dec 13 06:56:22.925545 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 06:56:22.926139 systemd[1]: Stopped systemd-sysctl.service. Dec 13 06:56:22.927592 systemd[1]: Stopping systemd-sysctl.service... Dec 13 06:56:22.931905 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:56:22.932405 systemd[1]: cri-containerd-417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545.scope: Deactivated successfully. Dec 13 06:56:22.946815 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:56:23.004151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5-rootfs.mount: Deactivated successfully. Dec 13 06:56:23.011212 env[1197]: time="2024-12-13T06:56:23.011158761Z" level=info msg="shim disconnected" id=417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545 Dec 13 06:56:23.011765 env[1197]: time="2024-12-13T06:56:23.011720978Z" level=warning msg="cleaning up after shim disconnected" id=417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545 namespace=k8s.io Dec 13 06:56:23.011907 env[1197]: time="2024-12-13T06:56:23.011876918Z" level=info msg="cleaning up dead shim" Dec 13 06:56:23.043957 env[1197]: time="2024-12-13T06:56:23.043224755Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:56:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1715 runtime=io.containerd.runc.v2\n" Dec 13 06:56:23.529758 kubelet[1453]: E1213 06:56:23.529645 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:23.782798 env[1197]: time="2024-12-13T06:56:23.782400120Z" level=info msg="CreateContainer within sandbox 
\"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 06:56:23.813488 env[1197]: time="2024-12-13T06:56:23.813409531Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a\"" Dec 13 06:56:23.819018 env[1197]: time="2024-12-13T06:56:23.818964860Z" level=info msg="StartContainer for \"971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a\"" Dec 13 06:56:23.849381 systemd[1]: Started cri-containerd-971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a.scope. Dec 13 06:56:23.905014 systemd[1]: cri-containerd-971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a.scope: Deactivated successfully. Dec 13 06:56:23.906442 env[1197]: time="2024-12-13T06:56:23.906393658Z" level=info msg="StartContainer for \"971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a\" returns successfully" Dec 13 06:56:23.969618 env[1197]: time="2024-12-13T06:56:23.969542366Z" level=info msg="shim disconnected" id=971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a Dec 13 06:56:23.969993 env[1197]: time="2024-12-13T06:56:23.969961473Z" level=warning msg="cleaning up after shim disconnected" id=971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a namespace=k8s.io Dec 13 06:56:23.970141 env[1197]: time="2024-12-13T06:56:23.970112559Z" level=info msg="cleaning up dead shim" Dec 13 06:56:23.993469 env[1197]: time="2024-12-13T06:56:23.993415428Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:56:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1770 runtime=io.containerd.runc.v2\n" Dec 13 06:56:24.000053 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a-rootfs.mount: Deactivated successfully. Dec 13 06:56:24.477600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296566682.mount: Deactivated successfully. Dec 13 06:56:24.530611 kubelet[1453]: E1213 06:56:24.530523 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:24.786181 env[1197]: time="2024-12-13T06:56:24.785839280Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 06:56:24.802915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709205663.mount: Deactivated successfully. Dec 13 06:56:24.813006 env[1197]: time="2024-12-13T06:56:24.812963486Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9\"" Dec 13 06:56:24.813902 env[1197]: time="2024-12-13T06:56:24.813865988Z" level=info msg="StartContainer for \"2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9\"" Dec 13 06:56:24.844577 systemd[1]: Started cri-containerd-2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9.scope. Dec 13 06:56:24.896504 systemd[1]: cri-containerd-2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9.scope: Deactivated successfully. 
Dec 13 06:56:24.900807 env[1197]: time="2024-12-13T06:56:24.900634970Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd1262ca_a278_4bba_959b_8d0d83228369.slice/cri-containerd-2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9.scope/memory.events\": no such file or directory" Dec 13 06:56:24.903196 env[1197]: time="2024-12-13T06:56:24.903151226Z" level=info msg="StartContainer for \"2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9\" returns successfully" Dec 13 06:56:24.999820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290409634.mount: Deactivated successfully. Dec 13 06:56:25.062386 env[1197]: time="2024-12-13T06:56:25.062332178Z" level=info msg="shim disconnected" id=2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9 Dec 13 06:56:25.062762 env[1197]: time="2024-12-13T06:56:25.062727644Z" level=warning msg="cleaning up after shim disconnected" id=2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9 namespace=k8s.io Dec 13 06:56:25.062893 env[1197]: time="2024-12-13T06:56:25.062864366Z" level=info msg="cleaning up dead shim" Dec 13 06:56:25.084080 env[1197]: time="2024-12-13T06:56:25.084036610Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:56:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1827 runtime=io.containerd.runc.v2\n" Dec 13 06:56:25.531490 kubelet[1453]: E1213 06:56:25.530996 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:25.678014 env[1197]: time="2024-12-13T06:56:25.677940356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:25.680628 env[1197]: time="2024-12-13T06:56:25.680589509Z" level=info 
msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:25.685703 env[1197]: time="2024-12-13T06:56:25.685654836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:25.688590 env[1197]: time="2024-12-13T06:56:25.688553683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:25.690322 env[1197]: time="2024-12-13T06:56:25.689548946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 06:56:25.693245 env[1197]: time="2024-12-13T06:56:25.693211578Z" level=info msg="CreateContainer within sandbox \"5c917c7566ed15b18c84b7276a4e39c2da4329673d416e0a964fb4aec11ee853\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 06:56:25.709727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614590352.mount: Deactivated successfully. Dec 13 06:56:25.717325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154219934.mount: Deactivated successfully. 
Dec 13 06:56:25.722470 env[1197]: time="2024-12-13T06:56:25.722370148Z" level=info msg="CreateContainer within sandbox \"5c917c7566ed15b18c84b7276a4e39c2da4329673d416e0a964fb4aec11ee853\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ee326207039369e0b19147ddec3497dda78597ecab844deb2fc42a6ade60d56\"" Dec 13 06:56:25.723390 env[1197]: time="2024-12-13T06:56:25.723349966Z" level=info msg="StartContainer for \"4ee326207039369e0b19147ddec3497dda78597ecab844deb2fc42a6ade60d56\"" Dec 13 06:56:25.753647 systemd[1]: Started cri-containerd-4ee326207039369e0b19147ddec3497dda78597ecab844deb2fc42a6ade60d56.scope. Dec 13 06:56:25.793109 env[1197]: time="2024-12-13T06:56:25.792312301Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 06:56:25.875266 env[1197]: time="2024-12-13T06:56:25.875200290Z" level=info msg="CreateContainer within sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\"" Dec 13 06:56:25.877380 env[1197]: time="2024-12-13T06:56:25.877336646Z" level=info msg="StartContainer for \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\"" Dec 13 06:56:25.897617 env[1197]: time="2024-12-13T06:56:25.897558701Z" level=info msg="StartContainer for \"4ee326207039369e0b19147ddec3497dda78597ecab844deb2fc42a6ade60d56\" returns successfully" Dec 13 06:56:25.921082 systemd[1]: Started cri-containerd-50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f.scope. 
Dec 13 06:56:26.011729 env[1197]: time="2024-12-13T06:56:26.009949744Z" level=info msg="StartContainer for \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\" returns successfully" Dec 13 06:56:26.039047 systemd[1]: run-containerd-runc-k8s.io-50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f-runc.3qRPRJ.mount: Deactivated successfully. Dec 13 06:56:26.175721 kubelet[1453]: I1213 06:56:26.174578 1453 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 06:56:26.499316 kubelet[1453]: E1213 06:56:26.499193 1453 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:26.532015 kubelet[1453]: E1213 06:56:26.531975 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:26.813743 kernel: Initializing XFRM netlink socket Dec 13 06:56:26.832958 kubelet[1453]: I1213 06:56:26.832808 1453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-78kmr" podStartSLOduration=7.850707791 podStartE2EDuration="20.83273575s" podCreationTimestamp="2024-12-13 06:56:06 +0000 UTC" firstStartedPulling="2024-12-13 06:56:08.998646824 +0000 UTC m=+3.957447009" lastFinishedPulling="2024-12-13 06:56:21.980674777 +0000 UTC m=+16.939474968" observedRunningTime="2024-12-13 06:56:26.832408913 +0000 UTC m=+21.791209125" watchObservedRunningTime="2024-12-13 06:56:26.83273575 +0000 UTC m=+21.791535941" Dec 13 06:56:26.833471 kubelet[1453]: I1213 06:56:26.833413 1453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nl47l" podStartSLOduration=4.155108605 podStartE2EDuration="20.833404231s" podCreationTimestamp="2024-12-13 06:56:06 +0000 UTC" firstStartedPulling="2024-12-13 06:56:09.013069266 +0000 UTC m=+3.971869454" lastFinishedPulling="2024-12-13 06:56:25.691364889 +0000 UTC m=+20.650165080" 
observedRunningTime="2024-12-13 06:56:26.808556863 +0000 UTC m=+21.767357053" watchObservedRunningTime="2024-12-13 06:56:26.833404231 +0000 UTC m=+21.792204434" Dec 13 06:56:27.533607 kubelet[1453]: E1213 06:56:27.533540 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:28.533896 kubelet[1453]: E1213 06:56:28.533843 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:28.561423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 06:56:28.561661 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 06:56:28.560612 systemd-networkd[1022]: cilium_host: Link UP Dec 13 06:56:28.561106 systemd-networkd[1022]: cilium_net: Link UP Dec 13 06:56:28.561428 systemd-networkd[1022]: cilium_net: Gained carrier Dec 13 06:56:28.562856 systemd-networkd[1022]: cilium_host: Gained carrier Dec 13 06:56:28.657929 systemd-networkd[1022]: cilium_host: Gained IPv6LL Dec 13 06:56:28.725334 systemd-networkd[1022]: cilium_vxlan: Link UP Dec 13 06:56:28.725346 systemd-networkd[1022]: cilium_vxlan: Gained carrier Dec 13 06:56:28.897029 systemd-networkd[1022]: cilium_net: Gained IPv6LL Dec 13 06:56:29.134810 kernel: NET: Registered PF_ALG protocol family Dec 13 06:56:29.535045 kubelet[1453]: E1213 06:56:29.534968 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:29.745342 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL Dec 13 06:56:30.173152 systemd-networkd[1022]: lxc_health: Link UP Dec 13 06:56:30.183145 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 06:56:30.182289 systemd-networkd[1022]: lxc_health: Gained carrier Dec 13 06:56:30.537276 kubelet[1453]: E1213 06:56:30.536446 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 06:56:31.077608 systemd[1]: Created slice kubepods-besteffort-pod2fbb16b2_b34b_4b81_bdb2_9e69050c3a75.slice. Dec 13 06:56:31.224065 kubelet[1453]: I1213 06:56:31.223347 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8h8f\" (UniqueName: \"kubernetes.io/projected/2fbb16b2-b34b-4b81-bdb2-9e69050c3a75-kube-api-access-q8h8f\") pod \"nginx-deployment-8587fbcb89-mzhr5\" (UID: \"2fbb16b2-b34b-4b81-bdb2-9e69050c3a75\") " pod="default/nginx-deployment-8587fbcb89-mzhr5" Dec 13 06:56:31.385512 env[1197]: time="2024-12-13T06:56:31.385261989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mzhr5,Uid:2fbb16b2-b34b-4b81-bdb2-9e69050c3a75,Namespace:default,Attempt:0,}" Dec 13 06:56:31.537753 kubelet[1453]: E1213 06:56:31.537474 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:31.579393 systemd-networkd[1022]: lxc58867811c948: Link UP Dec 13 06:56:31.611744 kernel: eth0: renamed from tmp97b22 Dec 13 06:56:31.629142 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 06:56:31.629326 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc58867811c948: link becomes ready Dec 13 06:56:31.629610 systemd-networkd[1022]: lxc58867811c948: Gained carrier Dec 13 06:56:31.985002 systemd-networkd[1022]: lxc_health: Gained IPv6LL Dec 13 06:56:32.537747 kubelet[1453]: E1213 06:56:32.537667 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:32.817063 systemd-networkd[1022]: lxc58867811c948: Gained IPv6LL Dec 13 06:56:33.539617 kubelet[1453]: E1213 06:56:33.539538 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:34.540762 kubelet[1453]: E1213 06:56:34.540625 1453 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:35.542145 kubelet[1453]: E1213 06:56:35.542068 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:36.543973 kubelet[1453]: E1213 06:56:36.543910 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:36.781627 env[1197]: time="2024-12-13T06:56:36.781161940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:56:36.781627 env[1197]: time="2024-12-13T06:56:36.781263527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:56:36.781627 env[1197]: time="2024-12-13T06:56:36.781289730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:56:36.782442 env[1197]: time="2024-12-13T06:56:36.781705173Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97b22922b2cf32c4b05b4cfefb994da1fdfd02a298d6f1ebda7681a9be84d1af pid=2522 runtime=io.containerd.runc.v2 Dec 13 06:56:36.813755 systemd[1]: run-containerd-runc-k8s.io-97b22922b2cf32c4b05b4cfefb994da1fdfd02a298d6f1ebda7681a9be84d1af-runc.rf9oGd.mount: Deactivated successfully. Dec 13 06:56:36.822728 systemd[1]: Started cri-containerd-97b22922b2cf32c4b05b4cfefb994da1fdfd02a298d6f1ebda7681a9be84d1af.scope. 
Dec 13 06:56:36.890054 env[1197]: time="2024-12-13T06:56:36.889946446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mzhr5,Uid:2fbb16b2-b34b-4b81-bdb2-9e69050c3a75,Namespace:default,Attempt:0,} returns sandbox id \"97b22922b2cf32c4b05b4cfefb994da1fdfd02a298d6f1ebda7681a9be84d1af\"" Dec 13 06:56:36.893721 env[1197]: time="2024-12-13T06:56:36.892978784Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 06:56:37.545198 kubelet[1453]: E1213 06:56:37.545060 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:38.546221 kubelet[1453]: E1213 06:56:38.546124 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:39.547389 kubelet[1453]: E1213 06:56:39.547297 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:40.548516 kubelet[1453]: E1213 06:56:40.548433 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:41.290934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679642167.mount: Deactivated successfully. 
Dec 13 06:56:41.550218 kubelet[1453]: E1213 06:56:41.549606 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:42.550648 kubelet[1453]: E1213 06:56:42.550558 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:43.551543 kubelet[1453]: E1213 06:56:43.551449 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:43.780108 env[1197]: time="2024-12-13T06:56:43.780003472Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:43.790026 env[1197]: time="2024-12-13T06:56:43.789976360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:43.794997 env[1197]: time="2024-12-13T06:56:43.794949068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:43.798806 env[1197]: time="2024-12-13T06:56:43.798750483Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:56:43.800168 env[1197]: time="2024-12-13T06:56:43.800121185Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 06:56:43.805254 env[1197]: time="2024-12-13T06:56:43.804603788Z" level=info msg="CreateContainer within sandbox 
\"97b22922b2cf32c4b05b4cfefb994da1fdfd02a298d6f1ebda7681a9be84d1af\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 06:56:43.822175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935939812.mount: Deactivated successfully. Dec 13 06:56:43.831084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880171540.mount: Deactivated successfully. Dec 13 06:56:43.844408 env[1197]: time="2024-12-13T06:56:43.844348023Z" level=info msg="CreateContainer within sandbox \"97b22922b2cf32c4b05b4cfefb994da1fdfd02a298d6f1ebda7681a9be84d1af\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6fbf9490a63413c425864e2b00ccc945f2e42020a8cc73bfb504079f989509c7\"" Dec 13 06:56:43.845804 env[1197]: time="2024-12-13T06:56:43.845741232Z" level=info msg="StartContainer for \"6fbf9490a63413c425864e2b00ccc945f2e42020a8cc73bfb504079f989509c7\"" Dec 13 06:56:43.881860 systemd[1]: Started cri-containerd-6fbf9490a63413c425864e2b00ccc945f2e42020a8cc73bfb504079f989509c7.scope. 
Dec 13 06:56:43.938633 env[1197]: time="2024-12-13T06:56:43.938560985Z" level=info msg="StartContainer for \"6fbf9490a63413c425864e2b00ccc945f2e42020a8cc73bfb504079f989509c7\" returns successfully" Dec 13 06:56:44.552834 kubelet[1453]: E1213 06:56:44.552747 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:45.553858 kubelet[1453]: E1213 06:56:45.553765 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:46.499104 kubelet[1453]: E1213 06:56:46.498994 1453 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:46.554105 kubelet[1453]: E1213 06:56:46.554074 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:47.555120 kubelet[1453]: E1213 06:56:47.555040 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:48.555583 kubelet[1453]: E1213 06:56:48.555478 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:49.556581 kubelet[1453]: E1213 06:56:49.556445 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:50.557711 kubelet[1453]: E1213 06:56:50.557601 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:51.557920 kubelet[1453]: E1213 06:56:51.557833 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:52.558766 kubelet[1453]: E1213 06:56:52.558644 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 06:56:53.166997 kubelet[1453]: I1213 06:56:53.166894 1453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-mzhr5" podStartSLOduration=15.256721540000001 podStartE2EDuration="22.166852568s" podCreationTimestamp="2024-12-13 06:56:31 +0000 UTC" firstStartedPulling="2024-12-13 06:56:36.892423145 +0000 UTC m=+31.851223335" lastFinishedPulling="2024-12-13 06:56:43.802554176 +0000 UTC m=+38.761354363" observedRunningTime="2024-12-13 06:56:44.869871469 +0000 UTC m=+39.828671662" watchObservedRunningTime="2024-12-13 06:56:53.166852568 +0000 UTC m=+48.125652780" Dec 13 06:56:53.176327 systemd[1]: Created slice kubepods-besteffort-podce830a7e_7171_4248_8b5d_0b018d124082.slice. Dec 13 06:56:53.265246 kubelet[1453]: I1213 06:56:53.265129 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ce830a7e-7171-4248-8b5d-0b018d124082-data\") pod \"nfs-server-provisioner-0\" (UID: \"ce830a7e-7171-4248-8b5d-0b018d124082\") " pod="default/nfs-server-provisioner-0" Dec 13 06:56:53.265696 kubelet[1453]: I1213 06:56:53.265593 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47d92\" (UniqueName: \"kubernetes.io/projected/ce830a7e-7171-4248-8b5d-0b018d124082-kube-api-access-47d92\") pod \"nfs-server-provisioner-0\" (UID: \"ce830a7e-7171-4248-8b5d-0b018d124082\") " pod="default/nfs-server-provisioner-0" Dec 13 06:56:53.483615 env[1197]: time="2024-12-13T06:56:53.483411959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ce830a7e-7171-4248-8b5d-0b018d124082,Namespace:default,Attempt:0,}" Dec 13 06:56:53.559501 kubelet[1453]: E1213 06:56:53.559410 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:53.568335 
systemd-networkd[1022]: lxc547bfdbe7db4: Link UP Dec 13 06:56:53.584078 kernel: eth0: renamed from tmpd255f Dec 13 06:56:53.598416 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 06:56:53.598512 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc547bfdbe7db4: link becomes ready Dec 13 06:56:53.598524 systemd-networkd[1022]: lxc547bfdbe7db4: Gained carrier Dec 13 06:56:53.844137 env[1197]: time="2024-12-13T06:56:53.844010007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:56:53.844405 env[1197]: time="2024-12-13T06:56:53.844097601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:56:53.844405 env[1197]: time="2024-12-13T06:56:53.844117878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:56:53.844788 env[1197]: time="2024-12-13T06:56:53.844662951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d255f80798a84a546b8cd7af6204e5a0b67bcd296d2adf59e724f3579cb67d75 pid=2655 runtime=io.containerd.runc.v2 Dec 13 06:56:53.868729 systemd[1]: Started cri-containerd-d255f80798a84a546b8cd7af6204e5a0b67bcd296d2adf59e724f3579cb67d75.scope. 
Dec 13 06:56:53.956064 env[1197]: time="2024-12-13T06:56:53.956000415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ce830a7e-7171-4248-8b5d-0b018d124082,Namespace:default,Attempt:0,} returns sandbox id \"d255f80798a84a546b8cd7af6204e5a0b67bcd296d2adf59e724f3579cb67d75\"" Dec 13 06:56:53.958557 env[1197]: time="2024-12-13T06:56:53.958470581Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 06:56:54.389419 systemd[1]: run-containerd-runc-k8s.io-d255f80798a84a546b8cd7af6204e5a0b67bcd296d2adf59e724f3579cb67d75-runc.h44dFi.mount: Deactivated successfully. Dec 13 06:56:54.559887 kubelet[1453]: E1213 06:56:54.559817 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:55.153405 systemd-networkd[1022]: lxc547bfdbe7db4: Gained IPv6LL Dec 13 06:56:55.560895 kubelet[1453]: E1213 06:56:55.560840 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:56.562159 kubelet[1453]: E1213 06:56:56.562031 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:57.562468 kubelet[1453]: E1213 06:56:57.562386 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:57.777359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764834733.mount: Deactivated successfully. 
Dec 13 06:56:58.563311 kubelet[1453]: E1213 06:56:58.563201 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:56:59.563796 kubelet[1453]: E1213 06:56:59.563659 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:00.564741 kubelet[1453]: E1213 06:57:00.564636 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:01.529281 env[1197]: time="2024-12-13T06:57:01.529171223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:57:01.535021 env[1197]: time="2024-12-13T06:57:01.534974022Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:57:01.537803 env[1197]: time="2024-12-13T06:57:01.537752715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:57:01.539146 env[1197]: time="2024-12-13T06:57:01.539110654Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:57:01.540335 env[1197]: time="2024-12-13T06:57:01.540287850Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 06:57:01.544982 env[1197]: time="2024-12-13T06:57:01.544938665Z" level=info 
msg="CreateContainer within sandbox \"d255f80798a84a546b8cd7af6204e5a0b67bcd296d2adf59e724f3579cb67d75\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 06:57:01.565666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293967612.mount: Deactivated successfully. Dec 13 06:57:01.566929 kubelet[1453]: E1213 06:57:01.566871 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:01.574398 env[1197]: time="2024-12-13T06:57:01.574346739Z" level=info msg="CreateContainer within sandbox \"d255f80798a84a546b8cd7af6204e5a0b67bcd296d2adf59e724f3579cb67d75\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b1f61a8c0e2f937c8871f2dbe8059e2d21fe04dce884c0c572b7557c5c4f9d27\"" Dec 13 06:57:01.575586 env[1197]: time="2024-12-13T06:57:01.575549888Z" level=info msg="StartContainer for \"b1f61a8c0e2f937c8871f2dbe8059e2d21fe04dce884c0c572b7557c5c4f9d27\"" Dec 13 06:57:01.617210 systemd[1]: Started cri-containerd-b1f61a8c0e2f937c8871f2dbe8059e2d21fe04dce884c0c572b7557c5c4f9d27.scope. 
Dec 13 06:57:01.682941 env[1197]: time="2024-12-13T06:57:01.682882360Z" level=info msg="StartContainer for \"b1f61a8c0e2f937c8871f2dbe8059e2d21fe04dce884c0c572b7557c5c4f9d27\" returns successfully" Dec 13 06:57:01.920679 kubelet[1453]: I1213 06:57:01.920576 1453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.336473672 podStartE2EDuration="8.920542816s" podCreationTimestamp="2024-12-13 06:56:53 +0000 UTC" firstStartedPulling="2024-12-13 06:56:53.958037901 +0000 UTC m=+48.916838092" lastFinishedPulling="2024-12-13 06:57:01.542107048 +0000 UTC m=+56.500907236" observedRunningTime="2024-12-13 06:57:01.919901402 +0000 UTC m=+56.878701608" watchObservedRunningTime="2024-12-13 06:57:01.920542816 +0000 UTC m=+56.879343030" Dec 13 06:57:02.559823 systemd[1]: run-containerd-runc-k8s.io-b1f61a8c0e2f937c8871f2dbe8059e2d21fe04dce884c0c572b7557c5c4f9d27-runc.5oyrIv.mount: Deactivated successfully. Dec 13 06:57:02.567707 kubelet[1453]: E1213 06:57:02.567647 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:03.568645 kubelet[1453]: E1213 06:57:03.568555 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:04.569188 kubelet[1453]: E1213 06:57:04.569106 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:05.570193 kubelet[1453]: E1213 06:57:05.570126 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:06.498751 kubelet[1453]: E1213 06:57:06.498636 1453 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:06.571164 kubelet[1453]: E1213 06:57:06.571114 1453 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:07.572227 kubelet[1453]: E1213 06:57:07.572118 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:08.572479 kubelet[1453]: E1213 06:57:08.572402 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:09.573487 kubelet[1453]: E1213 06:57:09.573399 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:10.574415 kubelet[1453]: E1213 06:57:10.574338 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:11.207275 systemd[1]: Created slice kubepods-besteffort-pod07d9b0f8_d39a_4405_b897_3bdb696b12a0.slice. Dec 13 06:57:11.286647 kubelet[1453]: I1213 06:57:11.286562 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef9d14f3-46d4-44af-aa42-dfd0f3d16efa\" (UniqueName: \"kubernetes.io/nfs/07d9b0f8-d39a-4405-b897-3bdb696b12a0-pvc-ef9d14f3-46d4-44af-aa42-dfd0f3d16efa\") pod \"test-pod-1\" (UID: \"07d9b0f8-d39a-4405-b897-3bdb696b12a0\") " pod="default/test-pod-1" Dec 13 06:57:11.287046 kubelet[1453]: I1213 06:57:11.286998 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lbmw\" (UniqueName: \"kubernetes.io/projected/07d9b0f8-d39a-4405-b897-3bdb696b12a0-kube-api-access-2lbmw\") pod \"test-pod-1\" (UID: \"07d9b0f8-d39a-4405-b897-3bdb696b12a0\") " pod="default/test-pod-1" Dec 13 06:57:11.446850 kernel: FS-Cache: Loaded Dec 13 06:57:11.515343 kernel: RPC: Registered named UNIX socket transport module. Dec 13 06:57:11.515571 kernel: RPC: Registered udp transport module. Dec 13 06:57:11.515646 kernel: RPC: Registered tcp transport module. 
Dec 13 06:57:11.518080 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 06:57:11.575353 kubelet[1453]: E1213 06:57:11.575229 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:11.602736 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 06:57:11.857299 kernel: NFS: Registering the id_resolver key type
Dec 13 06:57:11.857526 kernel: Key type id_resolver registered
Dec 13 06:57:11.859193 kernel: Key type id_legacy registered
Dec 13 06:57:11.924383 nfsidmap[2779]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 06:57:11.933667 nfsidmap[2782]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 06:57:12.114396 env[1197]: time="2024-12-13T06:57:12.112938171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:07d9b0f8-d39a-4405-b897-3bdb696b12a0,Namespace:default,Attempt:0,}"
Dec 13 06:57:12.169018 systemd-networkd[1022]: lxcad6aebdcb21e: Link UP
Dec 13 06:57:12.178839 kernel: eth0: renamed from tmp55300
Dec 13 06:57:12.185794 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 06:57:12.185910 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcad6aebdcb21e: link becomes ready
Dec 13 06:57:12.186005 systemd-networkd[1022]: lxcad6aebdcb21e: Gained carrier
Dec 13 06:57:12.429291 env[1197]: time="2024-12-13T06:57:12.428495988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:57:12.429609 env[1197]: time="2024-12-13T06:57:12.429552310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:57:12.429838 env[1197]: time="2024-12-13T06:57:12.429783311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:57:12.430383 env[1197]: time="2024-12-13T06:57:12.430287279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/553000140bf7cc809fb3d96eb149089830665bdffdcc02fe08a8ea608ef4b738 pid=2820 runtime=io.containerd.runc.v2
Dec 13 06:57:12.454456 systemd[1]: Started cri-containerd-553000140bf7cc809fb3d96eb149089830665bdffdcc02fe08a8ea608ef4b738.scope.
Dec 13 06:57:12.533005 env[1197]: time="2024-12-13T06:57:12.532949321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:07d9b0f8-d39a-4405-b897-3bdb696b12a0,Namespace:default,Attempt:0,} returns sandbox id \"553000140bf7cc809fb3d96eb149089830665bdffdcc02fe08a8ea608ef4b738\""
Dec 13 06:57:12.536311 env[1197]: time="2024-12-13T06:57:12.536274781Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 06:57:12.576811 kubelet[1453]: E1213 06:57:12.576731 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:12.865404 env[1197]: time="2024-12-13T06:57:12.865354361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:57:12.867246 env[1197]: time="2024-12-13T06:57:12.867208479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:57:12.869578 env[1197]: time="2024-12-13T06:57:12.869542886Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:57:12.876025 env[1197]: time="2024-12-13T06:57:12.875791631Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 06:57:12.876875 env[1197]: time="2024-12-13T06:57:12.876819179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:57:12.880435 env[1197]: time="2024-12-13T06:57:12.880365815Z" level=info msg="CreateContainer within sandbox \"553000140bf7cc809fb3d96eb149089830665bdffdcc02fe08a8ea608ef4b738\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 06:57:12.895349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408995406.mount: Deactivated successfully.
Dec 13 06:57:12.902877 env[1197]: time="2024-12-13T06:57:12.902832366Z" level=info msg="CreateContainer within sandbox \"553000140bf7cc809fb3d96eb149089830665bdffdcc02fe08a8ea608ef4b738\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e73386511ce0161fd6bba55bab271bf70e536a1df2dbb600a9dd4cfb77439f50\""
Dec 13 06:57:12.904142 env[1197]: time="2024-12-13T06:57:12.904093881Z" level=info msg="StartContainer for \"e73386511ce0161fd6bba55bab271bf70e536a1df2dbb600a9dd4cfb77439f50\""
Dec 13 06:57:12.929136 systemd[1]: Started cri-containerd-e73386511ce0161fd6bba55bab271bf70e536a1df2dbb600a9dd4cfb77439f50.scope.
Dec 13 06:57:12.975090 env[1197]: time="2024-12-13T06:57:12.975033823Z" level=info msg="StartContainer for \"e73386511ce0161fd6bba55bab271bf70e536a1df2dbb600a9dd4cfb77439f50\" returns successfully"
Dec 13 06:57:13.393023 systemd-networkd[1022]: lxcad6aebdcb21e: Gained IPv6LL
Dec 13 06:57:13.577267 kubelet[1453]: E1213 06:57:13.577179 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:13.959719 kubelet[1453]: I1213 06:57:13.959593 1453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.616532111 podStartE2EDuration="18.959542368s" podCreationTimestamp="2024-12-13 06:56:55 +0000 UTC" firstStartedPulling="2024-12-13 06:57:12.535357442 +0000 UTC m=+67.494157632" lastFinishedPulling="2024-12-13 06:57:12.878367698 +0000 UTC m=+67.837167889" observedRunningTime="2024-12-13 06:57:13.959055454 +0000 UTC m=+68.917855666" watchObservedRunningTime="2024-12-13 06:57:13.959542368 +0000 UTC m=+68.918342569"
Dec 13 06:57:14.577839 kubelet[1453]: E1213 06:57:14.577765 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:15.578340 kubelet[1453]: E1213 06:57:15.578273 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:16.580257 kubelet[1453]: E1213 06:57:16.580182 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:17.581533 kubelet[1453]: E1213 06:57:17.581454 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:18.583209 kubelet[1453]: E1213 06:57:18.583145 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:19.584204 kubelet[1453]: E1213 06:57:19.584045 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:20.584555 kubelet[1453]: E1213 06:57:20.584482 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:20.809962 env[1197]: time="2024-12-13T06:57:20.809860136Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 06:57:20.819158 env[1197]: time="2024-12-13T06:57:20.819119198Z" level=info msg="StopContainer for \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\" with timeout 2 (s)"
Dec 13 06:57:20.819731 env[1197]: time="2024-12-13T06:57:20.819674810Z" level=info msg="Stop container \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\" with signal terminated"
Dec 13 06:57:20.830349 systemd-networkd[1022]: lxc_health: Link DOWN
Dec 13 06:57:20.830358 systemd-networkd[1022]: lxc_health: Lost carrier
Dec 13 06:57:20.875321 systemd[1]: cri-containerd-50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f.scope: Deactivated successfully.
Dec 13 06:57:20.875911 systemd[1]: cri-containerd-50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f.scope: Consumed 10.094s CPU time.
Dec 13 06:57:20.904947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f-rootfs.mount: Deactivated successfully.
Dec 13 06:57:20.919352 env[1197]: time="2024-12-13T06:57:20.919295619Z" level=info msg="shim disconnected" id=50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f
Dec 13 06:57:20.919636 env[1197]: time="2024-12-13T06:57:20.919602253Z" level=warning msg="cleaning up after shim disconnected" id=50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f namespace=k8s.io
Dec 13 06:57:20.919854 env[1197]: time="2024-12-13T06:57:20.919826618Z" level=info msg="cleaning up dead shim"
Dec 13 06:57:20.934106 env[1197]: time="2024-12-13T06:57:20.934062549Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2951 runtime=io.containerd.runc.v2\n"
Dec 13 06:57:20.936239 env[1197]: time="2024-12-13T06:57:20.936198391Z" level=info msg="StopContainer for \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\" returns successfully"
Dec 13 06:57:20.937537 env[1197]: time="2024-12-13T06:57:20.937466815Z" level=info msg="StopPodSandbox for \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\""
Dec 13 06:57:20.937635 env[1197]: time="2024-12-13T06:57:20.937577531Z" level=info msg="Container to stop \"417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:57:20.937635 env[1197]: time="2024-12-13T06:57:20.937615506Z" level=info msg="Container to stop \"971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:57:20.937796 env[1197]: time="2024-12-13T06:57:20.937637506Z" level=info msg="Container to stop \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:57:20.937796 env[1197]: time="2024-12-13T06:57:20.937659308Z" level=info msg="Container to stop \"a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:57:20.937796 env[1197]: time="2024-12-13T06:57:20.937714421Z" level=info msg="Container to stop \"2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:57:20.940263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59-shm.mount: Deactivated successfully.
Dec 13 06:57:20.949246 systemd[1]: cri-containerd-44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59.scope: Deactivated successfully.
Dec 13 06:57:20.985547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59-rootfs.mount: Deactivated successfully.
Dec 13 06:57:20.990314 env[1197]: time="2024-12-13T06:57:20.990252860Z" level=info msg="shim disconnected" id=44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59
Dec 13 06:57:20.990736 env[1197]: time="2024-12-13T06:57:20.990664618Z" level=warning msg="cleaning up after shim disconnected" id=44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59 namespace=k8s.io
Dec 13 06:57:20.990902 env[1197]: time="2024-12-13T06:57:20.990859036Z" level=info msg="cleaning up dead shim"
Dec 13 06:57:21.002420 env[1197]: time="2024-12-13T06:57:21.002362784Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2984 runtime=io.containerd.runc.v2\n"
Dec 13 06:57:21.003175 env[1197]: time="2024-12-13T06:57:21.003125128Z" level=info msg="TearDown network for sandbox \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" successfully"
Dec 13 06:57:21.003175 env[1197]: time="2024-12-13T06:57:21.003169311Z" level=info msg="StopPodSandbox for \"44f0f6a5cfe880483e94573bea29e37457e028524cef2611ae4520d570290b59\" returns successfully"
Dec 13 06:57:21.155972 kubelet[1453]: I1213 06:57:21.155682 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfzh9\" (UniqueName: \"kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-kube-api-access-xfzh9\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.155972 kubelet[1453]: I1213 06:57:21.155839 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-bpf-maps\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.155972 kubelet[1453]: I1213 06:57:21.155920 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-hubble-tls\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.156619 kubelet[1453]: I1213 06:57:21.156587 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-net\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.156822 kubelet[1453]: I1213 06:57:21.156796 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cni-path\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.157038 kubelet[1453]: I1213 06:57:21.157012 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-kernel\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.157200 kubelet[1453]: I1213 06:57:21.157174 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-hostproc\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.157365 kubelet[1453]: I1213 06:57:21.157339 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-etc-cni-netd\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.157560 kubelet[1453]: I1213 06:57:21.157522 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-lib-modules\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.165344 systemd[1]: var-lib-kubelet-pods-dd1262ca\x2da278\x2d4bba\x2d959b\x2d8d0d83228369-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 06:57:21.167084 kubelet[1453]: I1213 06:57:21.159556 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-run\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.167193 kubelet[1453]: I1213 06:57:21.167115 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-config-path\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.167193 kubelet[1453]: I1213 06:57:21.167146 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-xtables-lock\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.167193 kubelet[1453]: I1213 06:57:21.167173 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-cgroup\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.167376 kubelet[1453]: I1213 06:57:21.167201 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd1262ca-a278-4bba-959b-8d0d83228369-clustermesh-secrets\") pod \"dd1262ca-a278-4bba-959b-8d0d83228369\" (UID: \"dd1262ca-a278-4bba-959b-8d0d83228369\") "
Dec 13 06:57:21.167982 kubelet[1453]: I1213 06:57:21.158104 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.168089 kubelet[1453]: I1213 06:57:21.158645 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.168089 kubelet[1453]: I1213 06:57:21.159446 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.168089 kubelet[1453]: I1213 06:57:21.159470 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cni-path" (OuterVolumeSpecName: "cni-path") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.168089 kubelet[1453]: I1213 06:57:21.159488 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.168089 kubelet[1453]: I1213 06:57:21.159511 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-hostproc" (OuterVolumeSpecName: "hostproc") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.168422 kubelet[1453]: I1213 06:57:21.159527 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.168422 kubelet[1453]: I1213 06:57:21.168073 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.169017 kubelet[1453]: I1213 06:57:21.168978 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-kube-api-access-xfzh9" (OuterVolumeSpecName: "kube-api-access-xfzh9") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "kube-api-access-xfzh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:57:21.169306 kubelet[1453]: I1213 06:57:21.169278 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:57:21.169467 kubelet[1453]: I1213 06:57:21.169277 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.169598 kubelet[1453]: I1213 06:57:21.169300 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:21.172020 kubelet[1453]: I1213 06:57:21.171986 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:57:21.174016 kubelet[1453]: I1213 06:57:21.173981 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1262ca-a278-4bba-959b-8d0d83228369-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dd1262ca-a278-4bba-959b-8d0d83228369" (UID: "dd1262ca-a278-4bba-959b-8d0d83228369"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 06:57:21.268340 kubelet[1453]: I1213 06:57:21.268295 1453 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-hostproc\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.268566 kubelet[1453]: I1213 06:57:21.268541 1453 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-etc-cni-netd\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.268760 kubelet[1453]: I1213 06:57:21.268738 1453 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-lib-modules\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.268960 kubelet[1453]: I1213 06:57:21.268937 1453 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-run\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.269121 kubelet[1453]: I1213 06:57:21.269094 1453 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-config-path\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.269329 kubelet[1453]: I1213 06:57:21.269306 1453 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-xtables-lock\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.269484 kubelet[1453]: I1213 06:57:21.269461 1453 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cilium-cgroup\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.269630 kubelet[1453]: I1213 06:57:21.269607 1453 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd1262ca-a278-4bba-959b-8d0d83228369-clustermesh-secrets\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.269817 kubelet[1453]: I1213 06:57:21.269793 1453 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xfzh9\" (UniqueName: \"kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-kube-api-access-xfzh9\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.269990 kubelet[1453]: I1213 06:57:21.269968 1453 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-bpf-maps\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.270152 kubelet[1453]: I1213 06:57:21.270129 1453 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd1262ca-a278-4bba-959b-8d0d83228369-hubble-tls\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.270328 kubelet[1453]: I1213 06:57:21.270305 1453 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-net\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.270474 kubelet[1453]: I1213 06:57:21.270451 1453 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-cni-path\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.270612 kubelet[1453]: I1213 06:57:21.270589 1453 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd1262ca-a278-4bba-959b-8d0d83228369-host-proc-sys-kernel\") on node \"10.230.34.74\" DevicePath \"\""
Dec 13 06:57:21.585876 kubelet[1453]: E1213 06:57:21.585816 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:21.668542 kubelet[1453]: E1213 06:57:21.668457 1453 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 06:57:21.783167 systemd[1]: var-lib-kubelet-pods-dd1262ca\x2da278\x2d4bba\x2d959b\x2d8d0d83228369-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfzh9.mount: Deactivated successfully.
Dec 13 06:57:21.783320 systemd[1]: var-lib-kubelet-pods-dd1262ca\x2da278\x2d4bba\x2d959b\x2d8d0d83228369-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 06:57:21.986875 kubelet[1453]: I1213 06:57:21.986114 1453 scope.go:117] "RemoveContainer" containerID="50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f"
Dec 13 06:57:21.989214 env[1197]: time="2024-12-13T06:57:21.988755350Z" level=info msg="RemoveContainer for \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\""
Dec 13 06:57:21.993645 env[1197]: time="2024-12-13T06:57:21.993514839Z" level=info msg="RemoveContainer for \"50cc485a6ca40ad156bc8c7a6b31367d1a297352db25e75d9cf65c950923d80f\" returns successfully"
Dec 13 06:57:21.993818 kubelet[1453]: I1213 06:57:21.993788 1453 scope.go:117] "RemoveContainer" containerID="2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9"
Dec 13 06:57:21.995207 env[1197]: time="2024-12-13T06:57:21.995155105Z" level=info msg="RemoveContainer for \"2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9\""
Dec 13 06:57:21.998655 systemd[1]: Removed slice kubepods-burstable-poddd1262ca_a278_4bba_959b_8d0d83228369.slice.
Dec 13 06:57:21.998808 systemd[1]: kubepods-burstable-poddd1262ca_a278_4bba_959b_8d0d83228369.slice: Consumed 10.262s CPU time.
Dec 13 06:57:22.001632 env[1197]: time="2024-12-13T06:57:22.001545085Z" level=info msg="RemoveContainer for \"2b49a76c6d0db9ee3741f24f27d9268a4f834418a0e10f426950afe072c545e9\" returns successfully"
Dec 13 06:57:22.002393 kubelet[1453]: I1213 06:57:22.002333 1453 scope.go:117] "RemoveContainer" containerID="971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a"
Dec 13 06:57:22.004420 env[1197]: time="2024-12-13T06:57:22.004365818Z" level=info msg="RemoveContainer for \"971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a\""
Dec 13 06:57:22.007515 env[1197]: time="2024-12-13T06:57:22.007473667Z" level=info msg="RemoveContainer for \"971c1b0aa28a02c620ff1cb83fd90689598dab78ba21ec8af71c24843b8bba1a\" returns successfully"
Dec 13 06:57:22.007799 kubelet[1453]: I1213 06:57:22.007765 1453 scope.go:117] "RemoveContainer" containerID="417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545"
Dec 13 06:57:22.009374 env[1197]: time="2024-12-13T06:57:22.009335166Z" level=info msg="RemoveContainer for \"417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545\""
Dec 13 06:57:22.013595 env[1197]: time="2024-12-13T06:57:22.013559935Z" level=info msg="RemoveContainer for \"417358521cc24441050b1dc2db3614a321a7729851477aaf4805999a6e2a6545\" returns successfully"
Dec 13 06:57:22.013824 kubelet[1453]: I1213 06:57:22.013796 1453 scope.go:117] "RemoveContainer" containerID="a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5"
Dec 13 06:57:22.015025 env[1197]: time="2024-12-13T06:57:22.014988981Z" level=info msg="RemoveContainer for \"a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5\""
Dec 13 06:57:22.018135 env[1197]: time="2024-12-13T06:57:22.018099754Z" level=info msg="RemoveContainer for \"a249e7c5ceeaeba52b79efa3dc3b754e3665e5ff31a7b6f988042a943e06b4c5\" returns successfully"
Dec 13 06:57:22.586767 kubelet[1453]: E1213 06:57:22.586675 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:22.746729 kubelet[1453]: I1213 06:57:22.746230 1453 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1262ca-a278-4bba-959b-8d0d83228369" path="/var/lib/kubelet/pods/dd1262ca-a278-4bba-959b-8d0d83228369/volumes"
Dec 13 06:57:23.588732 kubelet[1453]: E1213 06:57:23.588628 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:24.589876 kubelet[1453]: E1213 06:57:24.589808 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:25.590848 kubelet[1453]: E1213 06:57:25.590784 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:25.631242 kubelet[1453]: E1213 06:57:25.631189 1453 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd1262ca-a278-4bba-959b-8d0d83228369" containerName="mount-bpf-fs"
Dec 13 06:57:25.631565 kubelet[1453]: E1213 06:57:25.631538 1453 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd1262ca-a278-4bba-959b-8d0d83228369" containerName="apply-sysctl-overwrites"
Dec 13 06:57:25.631735 kubelet[1453]: E1213 06:57:25.631680 1453 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd1262ca-a278-4bba-959b-8d0d83228369" containerName="clean-cilium-state"
Dec 13 06:57:25.631892 kubelet[1453]: E1213 06:57:25.631869 1453 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd1262ca-a278-4bba-959b-8d0d83228369" containerName="cilium-agent"
Dec 13 06:57:25.632063 kubelet[1453]: E1213 06:57:25.632040 1453 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd1262ca-a278-4bba-959b-8d0d83228369" containerName="mount-cgroup"
Dec 13 06:57:25.632242 kubelet[1453]: I1213 06:57:25.632214 1453 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1262ca-a278-4bba-959b-8d0d83228369" containerName="cilium-agent"
Dec 13 06:57:25.639786 systemd[1]: Created slice kubepods-besteffort-pod9d080075_6b45_455e_b534_93b79dc77c17.slice.
Dec 13 06:57:25.663242 systemd[1]: Created slice kubepods-burstable-pode38704b8_a542_4850_b3a9_39dcd6dc3145.slice.
Dec 13 06:57:25.802727 kubelet[1453]: I1213 06:57:25.802528 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-run\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.802727 kubelet[1453]: I1213 06:57:25.802707 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-bpf-maps\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.802727 kubelet[1453]: I1213 06:57:25.802743 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-hostproc\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803195 kubelet[1453]: I1213 06:57:25.802824 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-cgroup\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803195 kubelet[1453]: I1213 06:57:25.802877 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-xtables-lock\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803195 kubelet[1453]: I1213 06:57:25.802947 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-config-path\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803195 kubelet[1453]: I1213 06:57:25.802995 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-lib-modules\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803195 kubelet[1453]: I1213 06:57:25.803059 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-net\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803195 kubelet[1453]: I1213 06:57:25.803128 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-kernel\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803535 kubelet[1453]: I1213 06:57:25.803157 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-hubble-tls\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803535 kubelet[1453]: I1213 06:57:25.803203 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cni-path\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803535 kubelet[1453]: I1213 06:57:25.803240 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-etc-cni-netd\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803535 kubelet[1453]: I1213 06:57:25.803303 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrz6b\" (UniqueName: \"kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-kube-api-access-vrz6b\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803535 kubelet[1453]: I1213 06:57:25.803333 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d080075-6b45-455e-b534-93b79dc77c17-cilium-config-path\") pod \"cilium-operator-5d85765b45-jjnqk\" (UID: \"9d080075-6b45-455e-b534-93b79dc77c17\") " pod="kube-system/cilium-operator-5d85765b45-jjnqk"
Dec 13 06:57:25.803878 kubelet[1453]: I1213 06:57:25.803379 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78jbg\" (UniqueName: \"kubernetes.io/projected/9d080075-6b45-455e-b534-93b79dc77c17-kube-api-access-78jbg\") pod \"cilium-operator-5d85765b45-jjnqk\" (UID: \"9d080075-6b45-455e-b534-93b79dc77c17\") " pod="kube-system/cilium-operator-5d85765b45-jjnqk"
Dec 13 06:57:25.803878 kubelet[1453]: I1213 06:57:25.803408 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-ipsec-secrets\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.803878 kubelet[1453]: I1213 06:57:25.803447 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-clustermesh-secrets\") pod \"cilium-qw2zr\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") " pod="kube-system/cilium-qw2zr"
Dec 13 06:57:25.975287 env[1197]: time="2024-12-13T06:57:25.974405662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qw2zr,Uid:e38704b8-a542-4850-b3a9-39dcd6dc3145,Namespace:kube-system,Attempt:0,}"
Dec 13 06:57:25.996505 env[1197]: time="2024-12-13T06:57:25.996236585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:57:25.996505 env[1197]: time="2024-12-13T06:57:25.996297307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:57:25.996505 env[1197]: time="2024-12-13T06:57:25.996316188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:57:25.997764 env[1197]: time="2024-12-13T06:57:25.997088515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2 pid=3014 runtime=io.containerd.runc.v2
Dec 13 06:57:26.017218 systemd[1]: Started cri-containerd-4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2.scope.
Dec 13 06:57:26.070404 env[1197]: time="2024-12-13T06:57:26.070300690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qw2zr,Uid:e38704b8-a542-4850-b3a9-39dcd6dc3145,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\""
Dec 13 06:57:26.074501 env[1197]: time="2024-12-13T06:57:26.074452509Z" level=info msg="CreateContainer within sandbox \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 06:57:26.119232 env[1197]: time="2024-12-13T06:57:26.119153795Z" level=info msg="CreateContainer within sandbox \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\""
Dec 13 06:57:26.120539 env[1197]: time="2024-12-13T06:57:26.120493411Z" level=info msg="StartContainer for \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\""
Dec 13 06:57:26.145771 systemd[1]: Started cri-containerd-25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01.scope.
Dec 13 06:57:26.167647 systemd[1]: cri-containerd-25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01.scope: Deactivated successfully.
Dec 13 06:57:26.192075 env[1197]: time="2024-12-13T06:57:26.191992151Z" level=info msg="shim disconnected" id=25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01
Dec 13 06:57:26.192421 env[1197]: time="2024-12-13T06:57:26.192388182Z" level=warning msg="cleaning up after shim disconnected" id=25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01 namespace=k8s.io
Dec 13 06:57:26.192574 env[1197]: time="2024-12-13T06:57:26.192532709Z" level=info msg="cleaning up dead shim"
Dec 13 06:57:26.205278 env[1197]: time="2024-12-13T06:57:26.205189901Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3073 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:57:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 06:57:26.205857 env[1197]: time="2024-12-13T06:57:26.205634433Z" level=error msg="copy shim log" error="read /proc/self/fd/54: file already closed"
Dec 13 06:57:26.206190 env[1197]: time="2024-12-13T06:57:26.206122981Z" level=error msg="Failed to pipe stderr of container \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\"" error="reading from a closed fifo"
Dec 13 06:57:26.206862 env[1197]: time="2024-12-13T06:57:26.206771005Z" level=error msg="Failed to pipe stdout of container \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\"" error="reading from a closed fifo"
Dec 13 06:57:26.208611 env[1197]: time="2024-12-13T06:57:26.208552819Z" level=error msg="StartContainer for \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 06:57:26.209730 kubelet[1453]: E1213 06:57:26.209077 1453 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01"
Dec 13 06:57:26.211208 kubelet[1453]: E1213 06:57:26.211136 1453 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 06:57:26.211208 kubelet[1453]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 06:57:26.211208 kubelet[1453]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 06:57:26.211208 kubelet[1453]: rm /hostbin/cilium-mount
Dec 13 06:57:26.211522 kubelet[1453]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrz6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qw2zr_kube-system(e38704b8-a542-4850-b3a9-39dcd6dc3145): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 06:57:26.211522 kubelet[1453]: > logger="UnhandledError"
Dec 13 06:57:26.212430 kubelet[1453]: E1213 06:57:26.212357 1453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qw2zr" podUID="e38704b8-a542-4850-b3a9-39dcd6dc3145"
Dec 13 06:57:26.244842 env[1197]: time="2024-12-13T06:57:26.244646208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jjnqk,Uid:9d080075-6b45-455e-b534-93b79dc77c17,Namespace:kube-system,Attempt:0,}"
Dec 13 06:57:26.282095 env[1197]: time="2024-12-13T06:57:26.281990724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:57:26.282982 env[1197]: time="2024-12-13T06:57:26.282913879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:57:26.283166 env[1197]: time="2024-12-13T06:57:26.283122648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:57:26.283518 env[1197]: time="2024-12-13T06:57:26.283464031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e2f4dbc733b10782d470960da65774c4b34f090b8143f278e4adf2e6f1350b6 pid=3095 runtime=io.containerd.runc.v2
Dec 13 06:57:26.303013 systemd[1]: Started cri-containerd-2e2f4dbc733b10782d470960da65774c4b34f090b8143f278e4adf2e6f1350b6.scope.
Dec 13 06:57:26.363219 env[1197]: time="2024-12-13T06:57:26.363150002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jjnqk,Uid:9d080075-6b45-455e-b534-93b79dc77c17,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2f4dbc733b10782d470960da65774c4b34f090b8143f278e4adf2e6f1350b6\""
Dec 13 06:57:26.366984 env[1197]: time="2024-12-13T06:57:26.366922296Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 06:57:26.499138 kubelet[1453]: E1213 06:57:26.498943 1453 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:26.592319 kubelet[1453]: E1213 06:57:26.592240 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:26.669835 kubelet[1453]: E1213 06:57:26.669769 1453 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 06:57:27.004017 env[1197]: time="2024-12-13T06:57:27.003861530Z" level=info msg="CreateContainer within sandbox \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Dec 13 06:57:27.019906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195603145.mount: Deactivated successfully.
Dec 13 06:57:27.029168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223580205.mount: Deactivated successfully.
Dec 13 06:57:27.035518 env[1197]: time="2024-12-13T06:57:27.035416154Z" level=info msg="CreateContainer within sandbox \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\""
Dec 13 06:57:27.037345 env[1197]: time="2024-12-13T06:57:27.036126048Z" level=info msg="StartContainer for \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\""
Dec 13 06:57:27.060319 systemd[1]: Started cri-containerd-f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b.scope.
Dec 13 06:57:27.074748 systemd[1]: cri-containerd-f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b.scope: Deactivated successfully.
Dec 13 06:57:27.086992 env[1197]: time="2024-12-13T06:57:27.086901027Z" level=info msg="shim disconnected" id=f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b
Dec 13 06:57:27.087258 env[1197]: time="2024-12-13T06:57:27.087193774Z" level=warning msg="cleaning up after shim disconnected" id=f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b namespace=k8s.io
Dec 13 06:57:27.087498 env[1197]: time="2024-12-13T06:57:27.087449221Z" level=info msg="cleaning up dead shim"
Dec 13 06:57:27.098042 env[1197]: time="2024-12-13T06:57:27.097950714Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3154 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:57:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 06:57:27.098451 env[1197]: time="2024-12-13T06:57:27.098375531Z" level=error msg="copy shim log" error="read /proc/self/fd/72: file already closed"
Dec 13 06:57:27.098825 env[1197]: time="2024-12-13T06:57:27.098756571Z" level=error msg="Failed to pipe stdout of container \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\"" error="reading from a closed fifo"
Dec 13 06:57:27.099155 env[1197]: time="2024-12-13T06:57:27.099108027Z" level=error msg="Failed to pipe stderr of container \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\"" error="reading from a closed fifo"
Dec 13 06:57:27.100798 env[1197]: time="2024-12-13T06:57:27.100742297Z" level=error msg="StartContainer for \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 06:57:27.102017 kubelet[1453]: E1213 06:57:27.101207 1453 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b"
Dec 13 06:57:27.102017 kubelet[1453]: E1213 06:57:27.101432 1453 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 06:57:27.102017 kubelet[1453]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 06:57:27.102017 kubelet[1453]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 06:57:27.102017 kubelet[1453]: rm /hostbin/cilium-mount
Dec 13 06:57:27.102017 kubelet[1453]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrz6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qw2zr_kube-system(e38704b8-a542-4850-b3a9-39dcd6dc3145): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 06:57:27.102017 kubelet[1453]: > logger="UnhandledError"
Dec 13 06:57:27.103166 kubelet[1453]: E1213 06:57:27.103074 1453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qw2zr" podUID="e38704b8-a542-4850-b3a9-39dcd6dc3145"
Dec 13 06:57:27.593208 kubelet[1453]: E1213 06:57:27.593119 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:27.926350 kubelet[1453]: I1213 06:57:27.925213 1453 setters.go:600] "Node became not ready" node="10.230.34.74" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T06:57:27Z","lastTransitionTime":"2024-12-13T06:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 06:57:28.006007 kubelet[1453]: I1213 06:57:28.005932 1453 scope.go:117] "RemoveContainer" containerID="25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01"
Dec 13 06:57:28.006832 env[1197]: time="2024-12-13T06:57:28.006775292Z" level=info msg="StopPodSandbox for \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\""
Dec 13 06:57:28.007451 env[1197]: time="2024-12-13T06:57:28.006969010Z" level=info msg="Container to stop \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:57:28.007564 env[1197]: time="2024-12-13T06:57:28.007526497Z" level=info msg="Container to stop \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 06:57:28.010566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2-shm.mount: Deactivated successfully.
Dec 13 06:57:28.012894 env[1197]: time="2024-12-13T06:57:28.012859567Z" level=info msg="RemoveContainer for \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\""
Dec 13 06:57:28.017233 env[1197]: time="2024-12-13T06:57:28.017197414Z" level=info msg="RemoveContainer for \"25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01\" returns successfully"
Dec 13 06:57:28.025312 systemd[1]: cri-containerd-4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2.scope: Deactivated successfully.
Dec 13 06:57:28.057004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2-rootfs.mount: Deactivated successfully.
Dec 13 06:57:28.063919 env[1197]: time="2024-12-13T06:57:28.063852012Z" level=info msg="shim disconnected" id=4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2
Dec 13 06:57:28.064194 env[1197]: time="2024-12-13T06:57:28.064160611Z" level=warning msg="cleaning up after shim disconnected" id=4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2 namespace=k8s.io
Dec 13 06:57:28.064350 env[1197]: time="2024-12-13T06:57:28.064321034Z" level=info msg="cleaning up dead shim"
Dec 13 06:57:28.077097 env[1197]: time="2024-12-13T06:57:28.077033332Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3185 runtime=io.containerd.runc.v2\n"
Dec 13 06:57:28.077608 env[1197]: time="2024-12-13T06:57:28.077557637Z" level=info msg="TearDown network for sandbox \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\" successfully"
Dec 13 06:57:28.077608 env[1197]: time="2024-12-13T06:57:28.077598226Z" level=info msg="StopPodSandbox for \"4e2fd1dc8394fae96ec54a47e3b89f95ee28ab9a6c80fab2cf39884fb96b14b2\" returns successfully"
Dec 13 06:57:28.233866 kubelet[1453]: I1213 06:57:28.233679 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-clustermesh-secrets\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.234568 kubelet[1453]: I1213 06:57:28.234541 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-run\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.234897 kubelet[1453]: I1213 06:57:28.234861 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-xtables-lock\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.235097 kubelet[1453]: I1213 06:57:28.235071 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-bpf-maps\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.235268 kubelet[1453]: I1213 06:57:28.235242 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-hostproc\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.235477 kubelet[1453]: I1213 06:57:28.235452 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-config-path\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.235657 kubelet[1453]: I1213 06:57:28.235620 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-kernel\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.235852 kubelet[1453]: I1213 06:57:28.235827 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cni-path\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.236042 kubelet[1453]: I1213 06:57:28.236007 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-ipsec-secrets\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.236190 kubelet[1453]: I1213 06:57:28.236164 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-lib-modules\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.236357 kubelet[1453]: I1213 06:57:28.236334 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-net\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.236506 kubelet[1453]: I1213 06:57:28.236482 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-cgroup\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.236685 kubelet[1453]: I1213 06:57:28.236658 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-hubble-tls\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.236863 kubelet[1453]: I1213 06:57:28.236838 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-etc-cni-netd\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.237051 kubelet[1453]: I1213 06:57:28.237013 1453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrz6b\" (UniqueName: \"kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-kube-api-access-vrz6b\") pod \"e38704b8-a542-4850-b3a9-39dcd6dc3145\" (UID: \"e38704b8-a542-4850-b3a9-39dcd6dc3145\") "
Dec 13 06:57:28.238520 kubelet[1453]: I1213 06:57:28.236038 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-hostproc" (OuterVolumeSpecName: "hostproc") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:28.238520 kubelet[1453]: I1213 06:57:28.236075 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:28.238667 kubelet[1453]: I1213 06:57:28.236099 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:28.238667 kubelet[1453]: I1213 06:57:28.236121 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:57:28.238667 kubelet[1453]: I1213 06:57:28.236145 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:57:28.238667 kubelet[1453]: I1213 06:57:28.238113 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cni-path" (OuterVolumeSpecName: "cni-path") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:57:28.238667 kubelet[1453]: I1213 06:57:28.238470 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:57:28.238667 kubelet[1453]: I1213 06:57:28.238593 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:57:28.244519 systemd[1]: var-lib-kubelet-pods-e38704b8\x2da542\x2d4850\x2db3a9\x2d39dcd6dc3145-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 06:57:28.246146 kubelet[1453]: I1213 06:57:28.246105 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:57:28.249669 systemd[1]: var-lib-kubelet-pods-e38704b8\x2da542\x2d4850\x2db3a9\x2d39dcd6dc3145-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvrz6b.mount: Deactivated successfully. Dec 13 06:57:28.250840 kubelet[1453]: I1213 06:57:28.250792 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:57:28.251409 kubelet[1453]: I1213 06:57:28.251366 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:57:28.252258 kubelet[1453]: I1213 06:57:28.252222 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-kube-api-access-vrz6b" (OuterVolumeSpecName: "kube-api-access-vrz6b") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "kube-api-access-vrz6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:57:28.252412 kubelet[1453]: I1213 06:57:28.252379 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:57:28.255377 kubelet[1453]: I1213 06:57:28.255338 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:57:28.255916 kubelet[1453]: I1213 06:57:28.255878 1453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e38704b8-a542-4850-b3a9-39dcd6dc3145" (UID: "e38704b8-a542-4850-b3a9-39dcd6dc3145"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:57:28.338227 kubelet[1453]: I1213 06:57:28.338174 1453 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-cgroup\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.338525 kubelet[1453]: I1213 06:57:28.338500 1453 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-hubble-tls\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.338827 kubelet[1453]: I1213 06:57:28.338665 1453 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-etc-cni-netd\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.339036 kubelet[1453]: I1213 06:57:28.338979 1453 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vrz6b\" (UniqueName: 
\"kubernetes.io/projected/e38704b8-a542-4850-b3a9-39dcd6dc3145-kube-api-access-vrz6b\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.339188 kubelet[1453]: I1213 06:57:28.339165 1453 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-clustermesh-secrets\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.339373 kubelet[1453]: I1213 06:57:28.339337 1453 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-run\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.339527 kubelet[1453]: I1213 06:57:28.339505 1453 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-xtables-lock\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.339671 kubelet[1453]: I1213 06:57:28.339648 1453 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-bpf-maps\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.339842 kubelet[1453]: I1213 06:57:28.339821 1453 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-hostproc\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.339975 kubelet[1453]: I1213 06:57:28.339955 1453 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-config-path\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.340153 kubelet[1453]: I1213 06:57:28.340130 1453 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-kernel\") on 
node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.340310 kubelet[1453]: I1213 06:57:28.340277 1453 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-cni-path\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.340485 kubelet[1453]: I1213 06:57:28.340460 1453 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e38704b8-a542-4850-b3a9-39dcd6dc3145-cilium-ipsec-secrets\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.340631 kubelet[1453]: I1213 06:57:28.340609 1453 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-lib-modules\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.340816 kubelet[1453]: I1213 06:57:28.340796 1453 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e38704b8-a542-4850-b3a9-39dcd6dc3145-host-proc-sys-net\") on node \"10.230.34.74\" DevicePath \"\"" Dec 13 06:57:28.593780 kubelet[1453]: E1213 06:57:28.593701 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:28.751051 systemd[1]: Removed slice kubepods-burstable-pode38704b8_a542_4850_b3a9_39dcd6dc3145.slice. Dec 13 06:57:28.912588 systemd[1]: var-lib-kubelet-pods-e38704b8\x2da542\x2d4850\x2db3a9\x2d39dcd6dc3145-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 06:57:28.912783 systemd[1]: var-lib-kubelet-pods-e38704b8\x2da542\x2d4850\x2db3a9\x2d39dcd6dc3145-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 06:57:28.969304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2469990455.mount: Deactivated successfully. 
Dec 13 06:57:29.040183 kubelet[1453]: I1213 06:57:29.039258 1453 scope.go:117] "RemoveContainer" containerID="f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b" Dec 13 06:57:29.046519 env[1197]: time="2024-12-13T06:57:29.046444614Z" level=info msg="RemoveContainer for \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\"" Dec 13 06:57:29.052235 env[1197]: time="2024-12-13T06:57:29.052174553Z" level=info msg="RemoveContainer for \"f855330275e9e6ad830e6c5b3b23bd4eb2d8553daafea165043d9cbe8ac6e51b\" returns successfully" Dec 13 06:57:29.091906 kubelet[1453]: E1213 06:57:29.091873 1453 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38704b8-a542-4850-b3a9-39dcd6dc3145" containerName="mount-cgroup" Dec 13 06:57:29.092093 kubelet[1453]: E1213 06:57:29.092067 1453 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38704b8-a542-4850-b3a9-39dcd6dc3145" containerName="mount-cgroup" Dec 13 06:57:29.092244 kubelet[1453]: I1213 06:57:29.092218 1453 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38704b8-a542-4850-b3a9-39dcd6dc3145" containerName="mount-cgroup" Dec 13 06:57:29.092403 kubelet[1453]: I1213 06:57:29.092376 1453 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38704b8-a542-4850-b3a9-39dcd6dc3145" containerName="mount-cgroup" Dec 13 06:57:29.100309 systemd[1]: Created slice kubepods-burstable-pod5d7ce8e7_0b47_42d9_8dd5_4856cad835ee.slice. 
Dec 13 06:57:29.248825 kubelet[1453]: I1213 06:57:29.247898 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-cilium-run\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.250184 kubelet[1453]: I1213 06:57:29.250152 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-hostproc\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.250364 kubelet[1453]: I1213 06:57:29.250332 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-clustermesh-secrets\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.250561 kubelet[1453]: I1213 06:57:29.250527 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-hubble-tls\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.250780 kubelet[1453]: I1213 06:57:29.250753 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-bpf-maps\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.250935 kubelet[1453]: I1213 06:57:29.250910 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-xtables-lock\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.251119 kubelet[1453]: I1213 06:57:29.251091 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-host-proc-sys-net\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.251279 kubelet[1453]: I1213 06:57:29.251252 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-cilium-cgroup\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.251430 kubelet[1453]: I1213 06:57:29.251404 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-lib-modules\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.251584 kubelet[1453]: I1213 06:57:29.251557 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-cilium-config-path\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.251801 kubelet[1453]: I1213 06:57:29.251745 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcv8\" (UniqueName: 
\"kubernetes.io/projected/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-kube-api-access-jqcv8\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.251976 kubelet[1453]: I1213 06:57:29.251951 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-cni-path\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.252167 kubelet[1453]: I1213 06:57:29.252140 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-etc-cni-netd\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.252321 kubelet[1453]: I1213 06:57:29.252283 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-cilium-ipsec-secrets\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.252520 kubelet[1453]: I1213 06:57:29.252490 1453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d7ce8e7-0b47-42d9-8dd5-4856cad835ee-host-proc-sys-kernel\") pod \"cilium-cwsp7\" (UID: \"5d7ce8e7-0b47-42d9-8dd5-4856cad835ee\") " pod="kube-system/cilium-cwsp7" Dec 13 06:57:29.296846 kubelet[1453]: W1213 06:57:29.296736 1453 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode38704b8_a542_4850_b3a9_39dcd6dc3145.slice/cri-containerd-25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01.scope WatchSource:0}: container "25427d594b1a79012034dd247f42fa3cc9bb1831cceab031b5b557cf26411c01" in namespace "k8s.io": not found Dec 13 06:57:29.420181 env[1197]: time="2024-12-13T06:57:29.420083298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwsp7,Uid:5d7ce8e7-0b47-42d9-8dd5-4856cad835ee,Namespace:kube-system,Attempt:0,}" Dec 13 06:57:29.457536 env[1197]: time="2024-12-13T06:57:29.457401234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:57:29.457536 env[1197]: time="2024-12-13T06:57:29.457491948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:57:29.457536 env[1197]: time="2024-12-13T06:57:29.457510846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:57:29.461864 env[1197]: time="2024-12-13T06:57:29.461802516Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016 pid=3216 runtime=io.containerd.runc.v2 Dec 13 06:57:29.486069 systemd[1]: Started cri-containerd-5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016.scope. 
Dec 13 06:57:29.548566 env[1197]: time="2024-12-13T06:57:29.547536683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwsp7,Uid:5d7ce8e7-0b47-42d9-8dd5-4856cad835ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\"" Dec 13 06:57:29.552503 env[1197]: time="2024-12-13T06:57:29.552468030Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:57:29.579960 env[1197]: time="2024-12-13T06:57:29.579919206Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"56a01e99ee00d9a61c5b7ae6171a50ef8cdca0b7445e5132e4ee7efbfdd597ac\"" Dec 13 06:57:29.580750 env[1197]: time="2024-12-13T06:57:29.580648922Z" level=info msg="StartContainer for \"56a01e99ee00d9a61c5b7ae6171a50ef8cdca0b7445e5132e4ee7efbfdd597ac\"" Dec 13 06:57:29.594603 kubelet[1453]: E1213 06:57:29.594539 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:29.613061 systemd[1]: Started cri-containerd-56a01e99ee00d9a61c5b7ae6171a50ef8cdca0b7445e5132e4ee7efbfdd597ac.scope. Dec 13 06:57:29.676135 env[1197]: time="2024-12-13T06:57:29.675988955Z" level=info msg="StartContainer for \"56a01e99ee00d9a61c5b7ae6171a50ef8cdca0b7445e5132e4ee7efbfdd597ac\" returns successfully" Dec 13 06:57:29.694117 systemd[1]: cri-containerd-56a01e99ee00d9a61c5b7ae6171a50ef8cdca0b7445e5132e4ee7efbfdd597ac.scope: Deactivated successfully. 
Dec 13 06:57:29.823830 env[1197]: time="2024-12-13T06:57:29.823764911Z" level=info msg="shim disconnected" id=56a01e99ee00d9a61c5b7ae6171a50ef8cdca0b7445e5132e4ee7efbfdd597ac Dec 13 06:57:29.823830 env[1197]: time="2024-12-13T06:57:29.823829287Z" level=warning msg="cleaning up after shim disconnected" id=56a01e99ee00d9a61c5b7ae6171a50ef8cdca0b7445e5132e4ee7efbfdd597ac namespace=k8s.io Dec 13 06:57:29.824200 env[1197]: time="2024-12-13T06:57:29.823847424Z" level=info msg="cleaning up dead shim" Dec 13 06:57:29.848806 env[1197]: time="2024-12-13T06:57:29.848741516Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3302 runtime=io.containerd.runc.v2\n" Dec 13 06:57:30.048048 env[1197]: time="2024-12-13T06:57:30.047965806Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 06:57:30.066771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834067152.mount: Deactivated successfully. Dec 13 06:57:30.078815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964205022.mount: Deactivated successfully. Dec 13 06:57:30.084110 env[1197]: time="2024-12-13T06:57:30.084065152Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d1fd05e025841ffed7ca0a6bffe641720c72400ba5fc826f3207d8536ee0287\"" Dec 13 06:57:30.085165 env[1197]: time="2024-12-13T06:57:30.085129200Z" level=info msg="StartContainer for \"2d1fd05e025841ffed7ca0a6bffe641720c72400ba5fc826f3207d8536ee0287\"" Dec 13 06:57:30.119787 systemd[1]: Started cri-containerd-2d1fd05e025841ffed7ca0a6bffe641720c72400ba5fc826f3207d8536ee0287.scope. 
Dec 13 06:57:30.181679 env[1197]: time="2024-12-13T06:57:30.181623046Z" level=info msg="StartContainer for \"2d1fd05e025841ffed7ca0a6bffe641720c72400ba5fc826f3207d8536ee0287\" returns successfully" Dec 13 06:57:30.194180 systemd[1]: cri-containerd-2d1fd05e025841ffed7ca0a6bffe641720c72400ba5fc826f3207d8536ee0287.scope: Deactivated successfully. Dec 13 06:57:30.261240 env[1197]: time="2024-12-13T06:57:30.261177391Z" level=info msg="shim disconnected" id=2d1fd05e025841ffed7ca0a6bffe641720c72400ba5fc826f3207d8536ee0287 Dec 13 06:57:30.261736 env[1197]: time="2024-12-13T06:57:30.261702377Z" level=warning msg="cleaning up after shim disconnected" id=2d1fd05e025841ffed7ca0a6bffe641720c72400ba5fc826f3207d8536ee0287 namespace=k8s.io Dec 13 06:57:30.261901 env[1197]: time="2024-12-13T06:57:30.261861303Z" level=info msg="cleaning up dead shim" Dec 13 06:57:30.288857 env[1197]: time="2024-12-13T06:57:30.288784823Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3363 runtime=io.containerd.runc.v2\n" Dec 13 06:57:30.371574 env[1197]: time="2024-12-13T06:57:30.371051438Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:57:30.372926 env[1197]: time="2024-12-13T06:57:30.372876185Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:57:30.375963 env[1197]: time="2024-12-13T06:57:30.375904460Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 
06:57:30.376950 env[1197]: time="2024-12-13T06:57:30.376899687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:57:30.379249 env[1197]: time="2024-12-13T06:57:30.379198882Z" level=info msg="CreateContainer within sandbox \"2e2f4dbc733b10782d470960da65774c4b34f090b8143f278e4adf2e6f1350b6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 06:57:30.400616 env[1197]: time="2024-12-13T06:57:30.400564259Z" level=info msg="CreateContainer within sandbox \"2e2f4dbc733b10782d470960da65774c4b34f090b8143f278e4adf2e6f1350b6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d6c270ef5860b4cd7865817f20c3070fe8823e7a7a7f0d47e9fcd108410113fe\"" Dec 13 06:57:30.401931 env[1197]: time="2024-12-13T06:57:30.401883740Z" level=info msg="StartContainer for \"d6c270ef5860b4cd7865817f20c3070fe8823e7a7a7f0d47e9fcd108410113fe\"" Dec 13 06:57:30.426055 systemd[1]: Started cri-containerd-d6c270ef5860b4cd7865817f20c3070fe8823e7a7a7f0d47e9fcd108410113fe.scope. 
Dec 13 06:57:30.479767 env[1197]: time="2024-12-13T06:57:30.479667654Z" level=info msg="StartContainer for \"d6c270ef5860b4cd7865817f20c3070fe8823e7a7a7f0d47e9fcd108410113fe\" returns successfully" Dec 13 06:57:30.595235 kubelet[1453]: E1213 06:57:30.595168 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:30.746645 kubelet[1453]: I1213 06:57:30.746071 1453 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38704b8-a542-4850-b3a9-39dcd6dc3145" path="/var/lib/kubelet/pods/e38704b8-a542-4850-b3a9-39dcd6dc3145/volumes" Dec 13 06:57:31.052431 env[1197]: time="2024-12-13T06:57:31.052090820Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 06:57:31.071780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740812930.mount: Deactivated successfully. Dec 13 06:57:31.082536 env[1197]: time="2024-12-13T06:57:31.082468085Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612\"" Dec 13 06:57:31.083544 env[1197]: time="2024-12-13T06:57:31.083497937Z" level=info msg="StartContainer for \"f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612\"" Dec 13 06:57:31.118174 systemd[1]: Started cri-containerd-f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612.scope. Dec 13 06:57:31.180793 env[1197]: time="2024-12-13T06:57:31.180696486Z" level=info msg="StartContainer for \"f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612\" returns successfully" Dec 13 06:57:31.183656 systemd[1]: cri-containerd-f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612.scope: Deactivated successfully. 
Dec 13 06:57:31.240567 env[1197]: time="2024-12-13T06:57:31.240467951Z" level=info msg="shim disconnected" id=f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612 Dec 13 06:57:31.240946 env[1197]: time="2024-12-13T06:57:31.240646267Z" level=warning msg="cleaning up after shim disconnected" id=f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612 namespace=k8s.io Dec 13 06:57:31.240946 env[1197]: time="2024-12-13T06:57:31.240667175Z" level=info msg="cleaning up dead shim" Dec 13 06:57:31.253450 env[1197]: time="2024-12-13T06:57:31.253395070Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3462 runtime=io.containerd.runc.v2\n" Dec 13 06:57:31.596027 kubelet[1453]: E1213 06:57:31.595948 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:57:31.671368 kubelet[1453]: E1213 06:57:31.671268 1453 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 06:57:31.913440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5fca8f16669e719b398ebc4b2b6b304338b7ace9d94ad1351447846c2209612-rootfs.mount: Deactivated successfully. Dec 13 06:57:32.061292 env[1197]: time="2024-12-13T06:57:32.061205832Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 06:57:32.077767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3693041911.mount: Deactivated successfully. 
Dec 13 06:57:32.083747 kubelet[1453]: I1213 06:57:32.083364 1453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-jjnqk" podStartSLOduration=3.072520974 podStartE2EDuration="7.083336577s" podCreationTimestamp="2024-12-13 06:57:25 +0000 UTC" firstStartedPulling="2024-12-13 06:57:26.366077721 +0000 UTC m=+81.324877906" lastFinishedPulling="2024-12-13 06:57:30.376893324 +0000 UTC m=+85.335693509" observedRunningTime="2024-12-13 06:57:31.103788511 +0000 UTC m=+86.062588712" watchObservedRunningTime="2024-12-13 06:57:32.083336577 +0000 UTC m=+87.042136761"
Dec 13 06:57:32.087923 env[1197]: time="2024-12-13T06:57:32.087861684Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c\""
Dec 13 06:57:32.089132 env[1197]: time="2024-12-13T06:57:32.089081483Z" level=info msg="StartContainer for \"b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c\""
Dec 13 06:57:32.119808 systemd[1]: Started cri-containerd-b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c.scope.
Dec 13 06:57:32.168694 systemd[1]: cri-containerd-b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c.scope: Deactivated successfully.
Dec 13 06:57:32.172448 env[1197]: time="2024-12-13T06:57:32.172203607Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d7ce8e7_0b47_42d9_8dd5_4856cad835ee.slice/cri-containerd-b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c.scope/memory.events\": no such file or directory"
Dec 13 06:57:32.174851 env[1197]: time="2024-12-13T06:57:32.174800627Z" level=info msg="StartContainer for \"b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c\" returns successfully"
Dec 13 06:57:32.210592 env[1197]: time="2024-12-13T06:57:32.210523689Z" level=info msg="shim disconnected" id=b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c
Dec 13 06:57:32.210592 env[1197]: time="2024-12-13T06:57:32.210591980Z" level=warning msg="cleaning up after shim disconnected" id=b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c namespace=k8s.io
Dec 13 06:57:32.210592 env[1197]: time="2024-12-13T06:57:32.210609051Z" level=info msg="cleaning up dead shim"
Dec 13 06:57:32.222462 env[1197]: time="2024-12-13T06:57:32.222392052Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:57:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3519 runtime=io.containerd.runc.v2\n"
Dec 13 06:57:32.596581 kubelet[1453]: E1213 06:57:32.596490 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:32.913391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b771b7e6e2a3f3ba05444cbe078ceba61123976ca6c80d33f638d7108f96785c-rootfs.mount: Deactivated successfully.
Dec 13 06:57:33.066660 env[1197]: time="2024-12-13T06:57:33.066595207Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 06:57:33.090070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527900283.mount: Deactivated successfully.
Dec 13 06:57:33.104628 env[1197]: time="2024-12-13T06:57:33.104554736Z" level=info msg="CreateContainer within sandbox \"5f0c49e95f685f708ba5ab4a4983fd64cdc6a39ff97dda0cb9a82522c24de016\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa\""
Dec 13 06:57:33.106521 env[1197]: time="2024-12-13T06:57:33.106486334Z" level=info msg="StartContainer for \"8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa\""
Dec 13 06:57:33.136959 systemd[1]: Started cri-containerd-8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa.scope.
Dec 13 06:57:33.190589 env[1197]: time="2024-12-13T06:57:33.188075777Z" level=info msg="StartContainer for \"8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa\" returns successfully"
Dec 13 06:57:33.597765 kubelet[1453]: E1213 06:57:33.597672 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:33.936768 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 06:57:34.099360 kubelet[1453]: I1213 06:57:34.099268 1453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cwsp7" podStartSLOduration=5.099177995 podStartE2EDuration="5.099177995s" podCreationTimestamp="2024-12-13 06:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:57:34.097819272 +0000 UTC m=+89.056619497" watchObservedRunningTime="2024-12-13 06:57:34.099177995 +0000 UTC m=+89.057978198"
Dec 13 06:57:34.598434 kubelet[1453]: E1213 06:57:34.598369 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:35.487842 systemd[1]: run-containerd-runc-k8s.io-8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa-runc.UpIkyJ.mount: Deactivated successfully.
Dec 13 06:57:35.599858 kubelet[1453]: E1213 06:57:35.599794 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:36.600949 kubelet[1453]: E1213 06:57:36.600898 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:37.411530 systemd-networkd[1022]: lxc_health: Link UP
Dec 13 06:57:37.429819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 06:57:37.430253 systemd-networkd[1022]: lxc_health: Gained carrier
Dec 13 06:57:37.601535 kubelet[1453]: E1213 06:57:37.601457 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:37.752240 systemd[1]: run-containerd-runc-k8s.io-8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa-runc.0f7eoo.mount: Deactivated successfully.
Dec 13 06:57:38.544930 systemd-networkd[1022]: lxc_health: Gained IPv6LL
Dec 13 06:57:38.613594 kubelet[1453]: E1213 06:57:38.612985 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:39.616520 kubelet[1453]: E1213 06:57:39.615886 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:40.098691 systemd[1]: run-containerd-runc-k8s.io-8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa-runc.onRjHe.mount: Deactivated successfully.
Dec 13 06:57:40.617499 kubelet[1453]: E1213 06:57:40.617397 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:41.617950 kubelet[1453]: E1213 06:57:41.617884 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:42.319883 systemd[1]: run-containerd-runc-k8s.io-8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa-runc.CpDmwz.mount: Deactivated successfully.
Dec 13 06:57:42.619716 kubelet[1453]: E1213 06:57:42.619449 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:43.619956 kubelet[1453]: E1213 06:57:43.619880 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:44.491108 systemd[1]: run-containerd-runc-k8s.io-8723a4d46f15f9dba86e9ef71ce722f54c75377ce2273f5203fd01f4f11b53fa-runc.47Zmux.mount: Deactivated successfully.
Dec 13 06:57:44.620653 kubelet[1453]: E1213 06:57:44.620499 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:45.620816 kubelet[1453]: E1213 06:57:45.620752 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:46.499424 kubelet[1453]: E1213 06:57:46.499339 1453 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:46.622565 kubelet[1453]: E1213 06:57:46.622516 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:47.624506 kubelet[1453]: E1213 06:57:47.624408 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:57:48.625360 kubelet[1453]: E1213 06:57:48.625309 1453 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"