Dec 13 14:49:43.949081 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:49:43.949123 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:49:43.949154 kernel: BIOS-provided physical RAM map:
Dec 13 14:49:43.949164 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:49:43.949173 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:49:43.949205 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:49:43.949218 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 14:49:43.949228 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 14:49:43.949238 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:49:43.949248 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 14:49:43.949263 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 14:49:43.949273 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:49:43.949283 kernel: NX (Execute Disable) protection: active
Dec 13 14:49:43.949293 kernel: SMBIOS 2.8 present.
Dec 13 14:49:43.949306 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 14:49:43.949317 kernel: Hypervisor detected: KVM
Dec 13 14:49:43.949332 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:49:43.949343 kernel: kvm-clock: cpu 0, msr 2419a001, primary cpu clock
Dec 13 14:49:43.952383 kernel: kvm-clock: using sched offset of 4838913278 cycles
Dec 13 14:49:43.952405 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:49:43.952418 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 14:49:43.952430 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:49:43.952442 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:49:43.952453 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 14:49:43.952464 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:49:43.952482 kernel: Using GB pages for direct mapping
Dec 13 14:49:43.952493 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:49:43.952504 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 14:49:43.952516 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:49:43.952527 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:49:43.952538 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:49:43.952549 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 14:49:43.952560 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:49:43.952571 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:49:43.952586 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:49:43.952597 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:49:43.952608 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 14:49:43.952619 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 14:49:43.952630 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 14:49:43.952641 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 14:49:43.952658 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 14:49:43.952674 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 14:49:43.952685 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 14:49:43.952697 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:49:43.952709 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:49:43.952721 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 14:49:43.952732 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 14:49:43.952744 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 14:49:43.952760 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 14:49:43.952771 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 14:49:43.952783 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 14:49:43.952794 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 14:49:43.952806 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 14:49:43.952817 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 14:49:43.952829 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 14:49:43.952841 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 14:49:43.952856 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 14:49:43.952867 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 14:49:43.952883 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 14:49:43.952895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 14:49:43.952906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 14:49:43.952918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 14:49:43.952938 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 14:49:43.952950 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 14:49:43.952962 kernel: Zone ranges:
Dec 13 14:49:43.952974 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:49:43.952986 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 14:49:43.953002 kernel: Normal empty
Dec 13 14:49:43.953014 kernel: Movable zone start for each node
Dec 13 14:49:43.953025 kernel: Early memory node ranges
Dec 13 14:49:43.953037 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:49:43.953049 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 14:49:43.953061 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 14:49:43.953072 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:49:43.953084 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:49:43.953096 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 14:49:43.953118 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:49:43.953130 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:49:43.953142 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:49:43.953154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:49:43.953166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:49:43.953178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:49:43.953202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:49:43.953215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:49:43.953226 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:49:43.953243 kernel: TSC deadline timer available
Dec 13 14:49:43.953255 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 14:49:43.953266 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 14:49:43.953278 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:49:43.953290 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:49:43.953302 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 14:49:43.953314 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 14:49:43.953327 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 14:49:43.953372 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 14:49:43.953391 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Dec 13 14:49:43.953403 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:49:43.953415 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:49:43.953426 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 14:49:43.953438 kernel: Policy zone: DMA32
Dec 13 14:49:43.953451 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:49:43.953464 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:49:43.953476 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:49:43.953492 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:49:43.953504 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:49:43.953516 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 192524K reserved, 0K cma-reserved)
Dec 13 14:49:43.953528 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 14:49:43.953540 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:49:43.953552 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:49:43.953563 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:49:43.953575 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:49:43.953588 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:49:43.953604 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 14:49:43.953616 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:49:43.953628 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:49:43.953640 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:49:43.953652 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 14:49:43.953663 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 14:49:43.953675 kernel: random: crng init done
Dec 13 14:49:43.953700 kernel: Console: colour VGA+ 80x25
Dec 13 14:49:43.953712 kernel: printk: console [tty0] enabled
Dec 13 14:49:43.953725 kernel: printk: console [ttyS0] enabled
Dec 13 14:49:43.953737 kernel: ACPI: Core revision 20210730
Dec 13 14:49:43.953749 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:49:43.953765 kernel: x2apic enabled
Dec 13 14:49:43.953778 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:49:43.953802 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 14:49:43.953814 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 14:49:43.953826 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 14:49:43.953842 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 14:49:43.953867 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 14:49:43.953878 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:49:43.953890 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:49:43.953901 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:49:43.953913 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:49:43.953924 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 14:49:43.953936 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:49:43.953962 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:49:43.953974 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 14:49:43.953985 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 14:49:43.954013 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 14:49:43.954026 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:49:43.954039 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:49:43.954051 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:49:43.954063 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:49:43.954076 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:49:43.954088 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:49:43.954100 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:49:43.954113 kernel: LSM: Security Framework initializing
Dec 13 14:49:43.954125 kernel: SELinux: Initializing.
Dec 13 14:49:43.954137 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:49:43.954154 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:49:43.954166 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 14:49:43.954178 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 14:49:43.954203 kernel: signal: max sigframe size: 1776
Dec 13 14:49:43.954216 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:49:43.954228 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:49:43.954241 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:49:43.954253 kernel: x86: Booting SMP configuration:
Dec 13 14:49:43.954265 kernel: .... node #0, CPUs: #1
Dec 13 14:49:43.954282 kernel: kvm-clock: cpu 1, msr 2419a041, secondary cpu clock
Dec 13 14:49:43.954295 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 14:49:43.954307 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Dec 13 14:49:43.954320 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:49:43.954332 kernel: smpboot: Max logical packages: 16
Dec 13 14:49:43.954356 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 14:49:43.954368 kernel: devtmpfs: initialized
Dec 13 14:49:43.954381 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:49:43.954393 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:49:43.954406 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 14:49:43.954424 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:49:43.954436 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:49:43.954449 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:49:43.954467 kernel: audit: type=2000 audit(1734101382.675:1): state=initialized audit_enabled=0 res=1
Dec 13 14:49:43.954480 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:49:43.954492 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:49:43.954505 kernel: cpuidle: using governor menu
Dec 13 14:49:43.954517 kernel: ACPI: bus type PCI registered
Dec 13 14:49:43.954530 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:49:43.954546 kernel: dca service started, version 1.12.1
Dec 13 14:49:43.954559 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 14:49:43.954571 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 14:49:43.954592 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:49:43.954605 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:49:43.954617 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:49:43.954630 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:49:43.954642 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:49:43.954654 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:49:43.954671 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:49:43.954683 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:49:43.954696 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:49:43.954708 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:49:43.954721 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:49:43.954733 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:49:43.954746 kernel: ACPI: Interpreter enabled
Dec 13 14:49:43.954758 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:49:43.954771 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:49:43.954787 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:49:43.954799 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 14:49:43.954812 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:49:43.955099 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:49:43.955287 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:49:43.955458 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:49:43.955477 kernel: PCI host bridge to bus 0000:00
Dec 13 14:49:43.955655 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:49:43.955796 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:49:43.955933 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:49:43.956081 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 14:49:43.956243 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 14:49:43.956399 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 14:49:43.956547 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:49:43.956733 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 14:49:43.956905 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 14:49:43.957069 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 14:49:43.957248 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 14:49:43.964490 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 14:49:43.964693 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:49:43.964881 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.965064 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 14:49:43.965288 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.965473 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 14:49:43.965641 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.965824 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 14:49:43.966015 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.966208 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 14:49:43.966412 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.966567 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 14:49:43.966752 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.966904 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 14:49:43.967079 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.967270 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 14:49:43.967447 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 14:49:43.967613 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 14:49:43.967816 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:49:43.967979 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 14:49:43.968133 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 14:49:43.968307 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 14:49:43.968489 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 14:49:43.968682 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:49:43.968838 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 14:49:43.968989 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 14:49:43.969141 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 14:49:43.969325 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 14:49:43.971592 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 14:49:43.971795 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 14:49:43.971972 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 14:49:43.972157 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 14:49:43.972356 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 14:49:43.972540 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 14:49:43.972730 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 14:49:43.972934 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 14:49:43.973137 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 14:49:43.973319 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 14:49:43.973513 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:49:43.973707 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 14:49:43.973919 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 14:49:43.974128 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 14:49:43.974329 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 14:49:43.974537 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 14:49:43.974717 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 14:49:43.974893 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 14:49:43.975066 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 14:49:43.975258 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 14:49:43.975428 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:49:43.975605 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 14:49:43.975755 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 14:49:43.975917 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 14:49:43.976075 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 14:49:43.976258 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:49:43.976427 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 14:49:43.976620 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 14:49:43.976795 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:49:43.976951 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 14:49:43.977109 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 14:49:43.977276 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:49:43.977444 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 14:49:43.977599 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 14:49:43.977759 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:49:43.977918 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 14:49:43.978071 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 14:49:43.978236 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:49:43.986439 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 14:49:43.986616 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 14:49:43.986788 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:49:43.986807 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:49:43.986827 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:49:43.986846 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:49:43.986858 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:49:43.986871 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 14:49:43.986889 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 14:49:43.986901 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 14:49:43.986913 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 14:49:43.986924 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 14:49:43.986936 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 14:49:43.986960 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 14:49:43.986976 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 14:49:43.986988 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 14:49:43.987000 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 14:49:43.987029 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 14:49:43.987042 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 14:49:43.987054 kernel: iommu: Default domain type: Translated
Dec 13 14:49:43.987067 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:49:43.987260 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 14:49:43.987434 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:49:43.987596 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 14:49:43.987616 kernel: vgaarb: loaded
Dec 13 14:49:43.987629 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:49:43.987642 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:49:43.987655 kernel: PTP clock support registered
Dec 13 14:49:43.987667 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:49:43.987679 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:49:43.987696 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:49:43.987714 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 14:49:43.987726 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:49:43.987750 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:49:43.987762 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:49:43.987774 kernel: pnp: PnP ACPI init
Dec 13 14:49:43.987968 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 14:49:43.987989 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:49:43.988001 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:49:43.988031 kernel: NET: Registered PF_INET protocol family
Dec 13 14:49:43.988043 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:49:43.988055 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:49:43.988067 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:49:43.988079 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:49:43.988103 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:49:43.988114 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:49:43.988125 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:49:43.988137 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:49:43.988152 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:49:43.988176 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:49:43.988355 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 14:49:43.988512 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:49:43.988665 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:49:43.988817 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 14:49:43.989000 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 14:49:43.989177 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 14:49:43.989366 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 14:49:43.989521 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 14:49:43.989679 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 14:49:43.989854 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 14:49:43.990005 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 14:49:43.990163 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 14:49:43.990329 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 14:49:43.990498 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 14:49:43.990652 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 14:49:43.990803 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 14:49:43.990964 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 14:49:43.991123 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 14:49:43.991290 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 14:49:43.991458 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 14:49:43.991619 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 14:49:43.991789 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:49:43.991984 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 14:49:43.992128 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 14:49:43.992305 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 14:49:43.992474 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:49:43.992627 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 14:49:43.992780 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 14:49:43.992963 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 14:49:43.993126 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:49:43.993302 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 14:49:44.006022 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 14:49:44.006226 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 14:49:44.006402 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:49:44.006568 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 14:49:44.006731 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 14:49:44.006874 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 14:49:44.007038 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:49:44.007227 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 14:49:44.007402 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 14:49:44.007578 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 14:49:44.007724 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:49:44.007888 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 14:49:44.008072 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 14:49:44.008239 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 14:49:44.008409 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:49:44.008562 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 14:49:44.008715 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 14:49:44.008875 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 14:49:44.009033 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:49:44.009201 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:49:44.009365 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:49:44.009509 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:49:44.009647 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 14:49:44.009786 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 14:49:44.009925 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 14:49:44.010106 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 14:49:44.010308 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 14:49:44.010504 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:49:44.010670 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 14:49:44.010847 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 14:49:44.010990 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 14:49:44.011154 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:49:44.011398 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 14:49:44.011553 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 14:49:44.011699 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:49:44.011866 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 14:49:44.012023 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 14:49:44.012169 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:49:44.012350 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 14:49:44.012509 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 14:49:44.012667 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:49:44.012886 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 14:49:44.013035 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 14:49:44.013189 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:49:44.013383 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 14:49:44.013540 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 14:49:44.013685 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:49:44.013840 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 14:49:44.013986 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 14:49:44.014132 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:49:44.014152 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 14:49:44.014166 kernel: PCI: CLS 0 bytes,
default 64 Dec 13 14:49:44.014190 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:49:44.014211 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 14:49:44.014225 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:49:44.014239 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 14:49:44.014252 kernel: Initialise system trusted keyrings Dec 13 14:49:44.014266 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 14:49:44.014279 kernel: Key type asymmetric registered Dec 13 14:49:44.014292 kernel: Asymmetric key parser 'x509' registered Dec 13 14:49:44.014305 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:49:44.014318 kernel: io scheduler mq-deadline registered Dec 13 14:49:44.014348 kernel: io scheduler kyber registered Dec 13 14:49:44.014363 kernel: io scheduler bfq registered Dec 13 14:49:44.014518 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 14:49:44.014679 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 14:49:44.014852 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:49:44.015049 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 14:49:44.015226 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 14:49:44.015402 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:49:44.015575 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 14:49:44.015755 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 14:49:44.015904 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 14:49:44.016105 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 14:49:44.016302 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 14:49:44.016490 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:49:44.016646 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 14:49:44.016837 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 14:49:44.017045 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:49:44.017241 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 14:49:44.017423 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 14:49:44.017597 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:49:44.017765 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 14:49:44.017932 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 14:49:44.018086 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:49:44.018281 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 14:49:44.025087 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 14:49:44.025304 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:49:44.025327 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:49:44.025362 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 14:49:44.025377 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:49:44.025390 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:49:44.025404 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:49:44.025417 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:49:44.025431 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:49:44.025451 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:49:44.025619 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 14:49:44.025641 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:49:44.025784 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 14:49:44.025955 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T14:49:43 UTC (1734101383) Dec 13 14:49:44.026089 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 14:49:44.026106 kernel: intel_pstate: CPU model not supported Dec 13 14:49:44.026124 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:49:44.026138 kernel: Segment Routing with IPv6 Dec 13 14:49:44.026150 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:49:44.026197 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:49:44.026212 kernel: Key type dns_resolver registered Dec 13 14:49:44.026225 kernel: IPI shorthand broadcast: enabled Dec 13 14:49:44.026239 kernel: sched_clock: Marking stable (996348989, 224957121)->(1518591110, -297285000) Dec 13 14:49:44.026252 kernel: registered taskstats version 1 Dec 13 14:49:44.026266 kernel: Loading compiled-in X.509 certificates Dec 13 14:49:44.026279 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:49:44.026297 kernel: Key type .fscrypt registered Dec 13 14:49:44.026309 kernel: Key type fscrypt-provisioning registered Dec 13 14:49:44.026322 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 14:49:44.026348 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:49:44.026363 kernel: ima: No architecture policies found Dec 13 14:49:44.026377 kernel: clk: Disabling unused clocks Dec 13 14:49:44.026390 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:49:44.026404 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:49:44.026422 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:49:44.026436 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:49:44.026449 kernel: Run /init as init process Dec 13 14:49:44.026472 kernel: with arguments: Dec 13 14:49:44.026497 kernel: /init Dec 13 14:49:44.026509 kernel: with environment: Dec 13 14:49:44.026521 kernel: HOME=/ Dec 13 14:49:44.026553 kernel: TERM=linux Dec 13 14:49:44.026566 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:49:44.026589 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:49:44.026612 systemd[1]: Detected virtualization kvm. Dec 13 14:49:44.026632 systemd[1]: Detected architecture x86-64. Dec 13 14:49:44.026646 systemd[1]: Running in initrd. Dec 13 14:49:44.026659 systemd[1]: No hostname configured, using default hostname. Dec 13 14:49:44.026673 systemd[1]: Hostname set to <localhost>. Dec 13 14:49:44.026690 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:49:44.026704 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:49:44.026722 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:49:44.026736 systemd[1]: Reached target cryptsetup.target. Dec 13 14:49:44.026749 systemd[1]: Reached target paths.target. Dec 13 14:49:44.026763 systemd[1]: Reached target slices.target. 
Dec 13 14:49:44.026776 systemd[1]: Reached target swap.target. Dec 13 14:49:44.026790 systemd[1]: Reached target timers.target. Dec 13 14:49:44.026805 systemd[1]: Listening on iscsid.socket. Dec 13 14:49:44.026823 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:49:44.026846 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:49:44.026860 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:49:44.026878 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:49:44.026892 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:49:44.026906 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:49:44.026920 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:49:44.026934 systemd[1]: Reached target sockets.target. Dec 13 14:49:44.026952 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:49:44.026977 systemd[1]: Finished network-cleanup.service. Dec 13 14:49:44.026991 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:49:44.027005 systemd[1]: Starting systemd-journald.service... Dec 13 14:49:44.027037 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:49:44.027051 systemd[1]: Starting systemd-resolved.service... Dec 13 14:49:44.027065 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:49:44.027079 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:49:44.027096 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:49:44.027120 systemd-journald[201]: Journal started Dec 13 14:49:44.027224 systemd-journald[201]: Runtime Journal (/run/log/journal/13a47ed4dae84ff7bbcc5b696543e75a) is 4.7M, max 38.1M, 33.3M free. 
Dec 13 14:49:43.949757 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 14:49:44.047318 kernel: Bridge firewalling registered Dec 13 14:49:44.005754 systemd-resolved[203]: Positive Trust Anchors: Dec 13 14:49:44.062009 systemd[1]: Started systemd-resolved.service. Dec 13 14:49:44.062043 kernel: audit: type=1130 audit(1734101384.047:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.062069 systemd[1]: Started systemd-journald.service. Dec 13 14:49:44.062089 kernel: audit: type=1130 audit(1734101384.055:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.005770 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:49:44.081874 kernel: SCSI subsystem initialized Dec 13 14:49:44.081913 kernel: audit: type=1130 audit(1734101384.064:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.081959 kernel: audit: type=1130 audit(1734101384.065:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:44.081979 kernel: audit: type=1130 audit(1734101384.065:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.005814 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:49:44.092975 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:49:44.093018 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:49:44.093037 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:49:44.010963 systemd-resolved[203]: Defaulting to hostname 'linux'. Dec 13 14:49:44.033724 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 14:49:44.064839 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 13 14:49:44.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.065681 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:49:44.105882 kernel: audit: type=1130 audit(1734101384.099:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.066508 systemd[1]: Reached target nss-lookup.target. Dec 13 14:49:44.083658 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:49:44.092076 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 14:49:44.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.097297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:49:44.117504 kernel: audit: type=1130 audit(1734101384.111:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.099300 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:49:44.107543 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:49:44.110861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:49:44.120889 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:49:44.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.127997 systemd[1]: Finished dracut-cmdline-ask.service. 
Dec 13 14:49:44.150937 kernel: audit: type=1130 audit(1734101384.121:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.150971 kernel: audit: type=1130 audit(1734101384.128:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.129987 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:49:44.151864 dracut-cmdline[224]: dracut-dracut-053 Dec 13 14:49:44.151864 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 14:49:44.151864 dracut-cmdline[224]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:49:44.248362 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:49:44.271368 kernel: iscsi: registered transport (tcp) Dec 13 14:49:44.300361 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:49:44.302371 kernel: QLogic iSCSI HBA Driver Dec 13 14:49:44.351631 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:49:44.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.353632 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 14:49:44.413408 kernel: raid6: sse2x4 gen() 12594 MB/s Dec 13 14:49:44.431411 kernel: raid6: sse2x4 xor() 7423 MB/s Dec 13 14:49:44.449385 kernel: raid6: sse2x2 gen() 9300 MB/s Dec 13 14:49:44.467398 kernel: raid6: sse2x2 xor() 7784 MB/s Dec 13 14:49:44.485385 kernel: raid6: sse2x1 gen() 8941 MB/s Dec 13 14:49:44.504156 kernel: raid6: sse2x1 xor() 7007 MB/s Dec 13 14:49:44.504236 kernel: raid6: using algorithm sse2x4 gen() 12594 MB/s Dec 13 14:49:44.504255 kernel: raid6: .... xor() 7423 MB/s, rmw enabled Dec 13 14:49:44.505534 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 14:49:44.523372 kernel: xor: automatically using best checksumming function avx Dec 13 14:49:44.644407 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:49:44.657840 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:49:44.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.658000 audit: BPF prog-id=7 op=LOAD Dec 13 14:49:44.658000 audit: BPF prog-id=8 op=LOAD Dec 13 14:49:44.659900 systemd[1]: Starting systemd-udevd.service... Dec 13 14:49:44.677316 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 14:49:44.686891 systemd[1]: Started systemd-udevd.service. Dec 13 14:49:44.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.690657 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:49:44.708204 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Dec 13 14:49:44.749907 systemd[1]: Finished dracut-pre-trigger.service. 
Dec 13 14:49:44.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.751777 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:49:44.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:44.845392 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:49:44.947407 kernel: ACPI: bus type USB registered Dec 13 14:49:44.947592 kernel: usbcore: registered new interface driver usbfs Dec 13 14:49:44.950330 kernel: usbcore: registered new interface driver hub Dec 13 14:49:44.950404 kernel: usbcore: registered new device driver usb Dec 13 14:49:44.964459 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 14:49:45.010261 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:49:45.010295 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:49:45.010318 kernel: GPT:17805311 != 125829119 Dec 13 14:49:45.010359 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:49:45.010385 kernel: GPT:17805311 != 125829119 Dec 13 14:49:45.010406 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 14:49:45.010440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:49:45.010470 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 14:49:45.026558 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 14:49:45.026760 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 14:49:45.026956 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 14:49:45.027142 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 14:49:45.027350 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 14:49:45.027541 kernel: hub 1-0:1.0: USB hub found Dec 13 14:49:45.027785 kernel: hub 1-0:1.0: 4 ports detected Dec 13 14:49:45.028001 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 14:49:45.028252 kernel: hub 2-0:1.0: USB hub found Dec 13 14:49:45.028472 kernel: hub 2-0:1.0: 4 ports detected Dec 13 14:49:45.047363 kernel: AVX version of gcm_enc/dec engaged. Dec 13 14:49:45.052361 kernel: AES CTR mode by8 optimization enabled Dec 13 14:49:45.056359 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Dec 13 14:49:45.058827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:49:45.165163 kernel: libata version 3.00 loaded. 
Dec 13 14:49:45.165202 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:49:45.165440 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:49:45.165462 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:49:45.165632 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:49:45.165824 kernel: scsi host0: ahci Dec 13 14:49:45.166056 kernel: scsi host1: ahci Dec 13 14:49:45.166264 kernel: scsi host2: ahci Dec 13 14:49:45.166463 kernel: scsi host3: ahci Dec 13 14:49:45.166662 kernel: scsi host4: ahci Dec 13 14:49:45.166856 kernel: scsi host5: ahci Dec 13 14:49:45.167086 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 14:49:45.167107 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 14:49:45.167132 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 14:49:45.167160 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 14:49:45.167187 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Dec 13 14:49:45.167204 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 14:49:45.180016 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:49:45.180978 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:49:45.187562 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:49:45.193066 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:49:45.196252 systemd[1]: Starting disk-uuid.service... Dec 13 14:49:45.208368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:49:45.212728 disk-uuid[528]: Primary Header is updated. Dec 13 14:49:45.212728 disk-uuid[528]: Secondary Entries is updated. Dec 13 14:49:45.212728 disk-uuid[528]: Secondary Header is updated. 
Dec 13 14:49:45.261371 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 14:49:45.401368 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:49:45.416362 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 14:49:45.420371 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:49:45.420410 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:49:45.424852 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:49:45.424895 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:49:45.426662 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:49:45.443353 kernel: usbcore: registered new interface driver usbhid Dec 13 14:49:45.443420 kernel: usbhid: USB HID core driver Dec 13 14:49:45.453302 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 13 14:49:45.453364 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 14:49:46.223366 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:49:46.224287 disk-uuid[529]: The operation has completed successfully. Dec 13 14:49:46.282446 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:49:46.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.282587 systemd[1]: Finished disk-uuid.service. Dec 13 14:49:46.289244 systemd[1]: Starting verity-setup.service... 
Dec 13 14:49:46.310376 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 14:49:46.366231 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:49:46.368249 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:49:46.370198 systemd[1]: Finished verity-setup.service. Dec 13 14:49:46.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.468366 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:49:46.468985 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:49:46.470591 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:49:46.472749 systemd[1]: Starting ignition-setup.service... Dec 13 14:49:46.475431 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:49:46.491705 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:49:46.491753 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:49:46.491777 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:49:46.507039 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:49:46.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.513901 systemd[1]: Finished ignition-setup.service. Dec 13 14:49:46.515632 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:49:46.627884 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:49:46.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:46.629000 audit: BPF prog-id=9 op=LOAD Dec 13 14:49:46.631143 systemd[1]: Starting systemd-networkd.service... Dec 13 14:49:46.664722 systemd-networkd[712]: lo: Link UP Dec 13 14:49:46.664753 systemd-networkd[712]: lo: Gained carrier Dec 13 14:49:46.665791 systemd-networkd[712]: Enumeration completed Dec 13 14:49:46.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.665892 systemd[1]: Started systemd-networkd.service. Dec 13 14:49:46.666329 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:49:46.667711 systemd[1]: Reached target network.target. Dec 13 14:49:46.671011 systemd[1]: Starting iscsiuio.service... Dec 13 14:49:46.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.671523 systemd-networkd[712]: eth0: Link UP Dec 13 14:49:46.671529 systemd-networkd[712]: eth0: Gained carrier Dec 13 14:49:46.690440 systemd[1]: Started iscsiuio.service. Dec 13 14:49:46.692299 systemd[1]: Starting iscsid.service... Dec 13 14:49:46.698986 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:49:46.698986 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:49:46.698986 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 14:49:46.698986 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:49:46.698986 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:49:46.698986 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:49:46.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.701365 systemd[1]: Started iscsid.service. Dec 13 14:49:46.705014 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:49:46.705162 systemd-networkd[712]: eth0: DHCPv4 address 10.230.34.126/30, gateway 10.230.34.125 acquired from 10.230.34.125 Dec 13 14:49:46.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.723307 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:49:46.725251 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:49:46.725999 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:49:46.727408 systemd[1]: Reached target remote-fs.target. Dec 13 14:49:46.731083 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:49:46.734562 ignition[628]: Ignition 2.14.0 Dec 13 14:49:46.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.734582 ignition[628]: Stage: fetch-offline Dec 13 14:49:46.739235 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:49:46.734734 ignition[628]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:49:46.741295 systemd[1]: Starting ignition-fetch.service... 
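The iscsid warnings above come from a missing /etc/iscsi/initiatorname.iscsi. A minimal sketch of creating a file in the shape iscsid asks for (the IQN and the /tmp path are made-up examples, not values from this host):

```shell
# Write a hypothetical InitiatorName file; a real system would use
# /etc/iscsi/initiatorname.iscsi and a site-specific IQN.
dir=/tmp/iscsi-demo
mkdir -p "$dir"
printf 'InitiatorName=iqn.2024-12.com.example:demo-node\n' > "$dir/initiatorname.iscsi"
# Check the line matches the InitiatorName=iqn.* shape iscsid expects.
grep -q '^InitiatorName=iqn\.' "$dir/initiatorname.iscsi" && echo formatted-ok
```

As the log itself notes, this only matters for software iSCSI (iscsi_tcp/ib_iser) or partial-offload drivers; hardware iSCSI such as qla4xxx can ignore the warning.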
Dec 13 14:49:46.734790 ignition[628]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:49:46.736824 ignition[628]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:49:46.737022 ignition[628]: parsed url from cmdline: "" Dec 13 14:49:46.748668 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:49:46.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.737029 ignition[628]: no config URL provided Dec 13 14:49:46.737039 ignition[628]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:49:46.737056 ignition[628]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:49:46.737065 ignition[628]: failed to fetch config: resource requires networking Dec 13 14:49:46.737534 ignition[628]: Ignition finished successfully Dec 13 14:49:46.755406 ignition[729]: Ignition 2.14.0 Dec 13 14:49:46.755424 ignition[729]: Stage: fetch Dec 13 14:49:46.755601 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:49:46.755644 ignition[729]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:49:46.756961 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:49:46.757080 ignition[729]: parsed url from cmdline: "" Dec 13 14:49:46.757105 ignition[729]: no config URL provided Dec 13 14:49:46.757126 ignition[729]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:49:46.757141 ignition[729]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:49:46.761568 ignition[729]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... 
Dec 13 14:49:46.761586 ignition[729]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 14:49:46.762522 ignition[729]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 14:49:46.779777 ignition[729]: GET result: OK Dec 13 14:49:46.780742 ignition[729]: parsing config with SHA512: 6b090b17a2a75fa1d19d8d097c5070aac0e7ccc84c44ca65bddbded5c7af7b0d682fca043905494c1161cd338d06536a0cfcb4aceb6b89f56ebfca60c8df0a56 Dec 13 14:49:46.791786 unknown[729]: fetched base config from "system" Dec 13 14:49:46.792645 unknown[729]: fetched base config from "system" Dec 13 14:49:46.793460 unknown[729]: fetched user config from "openstack" Dec 13 14:49:46.794668 ignition[729]: fetch: fetch complete Dec 13 14:49:46.795417 ignition[729]: fetch: fetch passed Dec 13 14:49:46.796191 ignition[729]: Ignition finished successfully Dec 13 14:49:46.798569 systemd[1]: Finished ignition-fetch.service. Dec 13 14:49:46.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.801178 systemd[1]: Starting ignition-kargs.service... Dec 13 14:49:46.814212 ignition[737]: Ignition 2.14.0 Dec 13 14:49:46.814231 ignition[737]: Stage: kargs Dec 13 14:49:46.814415 ignition[737]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:49:46.814462 ignition[737]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:49:46.815847 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:49:46.817035 ignition[737]: kargs: kargs passed Dec 13 14:49:46.818330 systemd[1]: Finished ignition-kargs.service. 
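Each Ignition stage above prints the SHA512 of the parsed base config before running. A minimal sketch of producing that kind of digest with coreutils (the file contents here are a made-up stand-in, not the real base.ign):

```shell
# Hypothetical stand-in for /usr/lib/ignition/base.d/base.ign; the real
# file's digest is the 128-hex-character value the log prints.
printf '%s' '{"ignition": {"version": "3.0.0"}}' > /tmp/base-demo.ign
digest=$(sha512sum /tmp/base-demo.ign | awk '{print $1}')
# A SHA512 hex digest is always 128 characters long.
[ "${#digest}" -eq 128 ] && echo digest-ok
```

Comparing a freshly computed digest against the logged one is a quick way to confirm which config bytes a given stage actually consumed.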
Dec 13 14:49:46.817104 ignition[737]: Ignition finished successfully Dec 13 14:49:46.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.821457 systemd[1]: Starting ignition-disks.service... Dec 13 14:49:46.833294 ignition[743]: Ignition 2.14.0 Dec 13 14:49:46.834308 ignition[743]: Stage: disks Dec 13 14:49:46.835189 ignition[743]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:49:46.836153 ignition[743]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:49:46.837600 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:49:46.839937 ignition[743]: disks: disks passed Dec 13 14:49:46.840730 ignition[743]: Ignition finished successfully Dec 13 14:49:46.842496 systemd[1]: Finished ignition-disks.service. Dec 13 14:49:46.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:46.843405 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:49:46.844735 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:49:46.846209 systemd[1]: Reached target local-fs.target. Dec 13 14:49:46.847548 systemd[1]: Reached target sysinit.target. Dec 13 14:49:46.848859 systemd[1]: Reached target basic.target. Dec 13 14:49:46.851500 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:49:46.871252 systemd-fsck[750]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:49:46.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:46.875886 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:49:46.877646 systemd[1]: Mounting sysroot.mount... Dec 13 14:49:46.895383 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:49:46.895805 systemd[1]: Mounted sysroot.mount. Dec 13 14:49:46.896593 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:49:46.899418 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:49:46.900707 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:49:46.901777 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 14:49:46.902607 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:49:46.902658 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:49:46.909419 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:49:46.911721 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:49:46.920299 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:49:46.936694 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:49:46.947057 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:49:46.958286 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:49:47.035560 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:49:47.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:47.037722 systemd[1]: Starting ignition-mount.service... Dec 13 14:49:47.039555 systemd[1]: Starting sysroot-boot.service... Dec 13 14:49:47.051439 bash[804]: umount: /sysroot/usr/share/oem: not mounted. 
Dec 13 14:49:47.066253 ignition[805]: INFO : Ignition 2.14.0 Dec 13 14:49:47.067359 ignition[805]: INFO : Stage: mount Dec 13 14:49:47.068284 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:49:47.069355 ignition[805]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:49:47.072225 coreos-metadata[756]: Dec 13 14:49:47.072 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 14:49:47.075519 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:49:47.078944 ignition[805]: INFO : mount: mount passed Dec 13 14:49:47.079778 ignition[805]: INFO : Ignition finished successfully Dec 13 14:49:47.081732 systemd[1]: Finished ignition-mount.service. Dec 13 14:49:47.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:47.088038 systemd[1]: Finished sysroot-boot.service. Dec 13 14:49:47.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:47.094602 coreos-metadata[756]: Dec 13 14:49:47.094 INFO Fetch successful Dec 13 14:49:47.095751 coreos-metadata[756]: Dec 13 14:49:47.095 INFO wrote hostname srv-h3gjt.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 14:49:47.100181 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 14:49:47.100317 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 14:49:47.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:47.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:47.390400 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:49:47.408742 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (814) Dec 13 14:49:47.411359 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:49:47.411393 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:49:47.411411 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:49:47.419206 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:49:47.421172 systemd[1]: Starting ignition-files.service... Dec 13 14:49:47.444838 ignition[834]: INFO : Ignition 2.14.0 Dec 13 14:49:47.445861 ignition[834]: INFO : Stage: files Dec 13 14:49:47.445861 ignition[834]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:49:47.445861 ignition[834]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:49:47.448772 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:49:47.449773 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:49:47.450654 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:49:47.450654 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:49:47.453539 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:49:47.454662 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:49:47.456066 unknown[834]: wrote ssh authorized keys file for 
user: core Dec 13 14:49:47.457154 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:49:47.458153 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:49:47.468118 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:49:47.468118 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:49:48.062617 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 
14:49:48.376819 systemd-networkd[712]: eth0: Gained IPv6LL Dec 13 14:49:49.883411 systemd-networkd[712]: eth0: Ignoring DHCPv6 address 2a02:1348:179:889f:24:19ff:fee6:227e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:889f:24:19ff:fee6:227e/64 assigned by NDisc. Dec 13 14:49:49.883423 systemd-networkd[712]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 14:49:50.358900 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(8): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(8): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(9): [started] processing unit "containerd.service" Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(9): op(a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(9): op(a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(9): [finished] processing unit "containerd.service" Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:49:50.360670 ignition[834]: INFO : files: op(b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:49:50.372177 ignition[834]: INFO : files: createResultFile: createFiles: op(c): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:49:50.372177 ignition[834]: INFO : files: createResultFile: createFiles: 
op(c): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:49:50.372177 ignition[834]: INFO : files: files passed Dec 13 14:49:50.372177 ignition[834]: INFO : Ignition finished successfully Dec 13 14:49:50.385153 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 14:49:50.385186 kernel: audit: type=1130 audit(1734101390.376:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.374716 systemd[1]: Finished ignition-files.service. Dec 13 14:49:50.378733 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:49:50.385886 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:49:50.387528 systemd[1]: Starting ignition-quench.service... Dec 13 14:49:50.394321 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:49:50.401178 kernel: audit: type=1130 audit(1734101390.394:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.394288 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Dec 13 14:49:50.413443 kernel: audit: type=1130 audit(1734101390.401:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.413491 kernel: audit: type=1131 audit(1734101390.403:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.395442 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:49:50.395559 systemd[1]: Finished ignition-quench.service. Dec 13 14:49:50.403560 systemd[1]: Reached target ignition-complete.target. Dec 13 14:49:50.413867 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:49:50.432119 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:49:50.433097 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:49:50.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.438580 systemd[1]: Reached target initrd-fs.target. Dec 13 14:49:50.446324 kernel: audit: type=1130 audit(1734101390.434:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:50.446372 kernel: audit: type=1131 audit(1734101390.438:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.445051 systemd[1]: Reached target initrd.target. Dec 13 14:49:50.445778 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:49:50.446877 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:49:50.463593 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:49:50.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.465264 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:49:50.471658 kernel: audit: type=1130 audit(1734101390.463:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.478084 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:49:50.478867 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:49:50.480177 systemd[1]: Stopped target timers.target. Dec 13 14:49:50.482167 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:49:50.482433 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:49:50.488831 kernel: audit: type=1131 audit(1734101390.483:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:50.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.483943 systemd[1]: Stopped target initrd.target. Dec 13 14:49:50.489617 systemd[1]: Stopped target basic.target. Dec 13 14:49:50.490765 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:49:50.491958 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:49:50.493197 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:49:50.494507 systemd[1]: Stopped target remote-fs.target. Dec 13 14:49:50.495728 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:49:50.497001 systemd[1]: Stopped target sysinit.target. Dec 13 14:49:50.498181 systemd[1]: Stopped target local-fs.target. Dec 13 14:49:50.513263 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:49:50.514475 systemd[1]: Stopped target swap.target. Dec 13 14:49:50.515673 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:49:50.522178 kernel: audit: type=1131 audit(1734101390.516:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.515894 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:49:50.517073 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:49:50.529261 kernel: audit: type=1131 audit(1734101390.523:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:50.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.522966 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:49:50.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.523196 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:49:50.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.524362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:49:50.524592 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:49:50.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.530197 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:49:50.551590 iscsid[717]: iscsid shutting down. Dec 13 14:49:50.530430 systemd[1]: Stopped ignition-files.service. Dec 13 14:49:50.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:50.553746 ignition[872]: INFO : Ignition 2.14.0 Dec 13 14:49:50.553746 ignition[872]: INFO : Stage: umount Dec 13 14:49:50.553746 ignition[872]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:49:50.553746 ignition[872]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:49:50.553746 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:49:50.553746 ignition[872]: INFO : umount: umount passed Dec 13 14:49:50.553746 ignition[872]: INFO : Ignition finished successfully Dec 13 14:49:50.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.532703 systemd[1]: Stopping ignition-mount.service... Dec 13 14:49:50.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:50.533845 systemd[1]: Stopping iscsid.service... Dec 13 14:49:50.540523 systemd[1]: Stopping sysroot-boot.service... 
Dec 13 14:49:50.541229 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:49:50.541575 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:49:50.542475 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:49:50.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.542645 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:49:50.545663 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:49:50.547450 systemd[1]: Stopped iscsid.service.
Dec 13 14:49:50.552739 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:49:50.552872 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:49:50.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.559614 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:49:50.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.559766 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:49:50.563198 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:49:50.563261 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:49:50.563946 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:49:50.564027 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:49:50.564733 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:49:50.564802 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:49:50.565733 systemd[1]: Stopped target paths.target.
Dec 13 14:49:50.566294 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:49:50.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.567882 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:49:50.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.568736 systemd[1]: Stopped target slices.target.
Dec 13 14:49:50.573437 systemd[1]: Stopped target sockets.target.
Dec 13 14:49:50.603000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:49:50.576522 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:49:50.576592 systemd[1]: Closed iscsid.socket.
Dec 13 14:49:50.577853 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:49:50.577915 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:49:50.579226 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:49:50.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.584970 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:49:50.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.585805 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:49:50.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.585966 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:49:50.587116 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:49:50.587241 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:49:50.589023 systemd[1]: Stopped target network.target.
Dec 13 14:49:50.590040 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:49:50.590099 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:49:50.592035 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:49:50.592862 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:49:50.596410 systemd-networkd[712]: eth0: DHCPv6 lease lost
Dec 13 14:49:50.619000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:49:50.597719 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:49:50.597869 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:49:50.600147 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:49:50.600299 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:49:50.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.601754 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:49:50.601802 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:49:50.603711 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:49:50.606918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:49:50.606987 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:49:50.608249 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:49:50.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.608323 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:49:50.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.609814 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:49:50.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.609873 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:49:50.616750 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:49:50.621108 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:49:50.622026 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:49:50.622244 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:49:50.624859 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:49:50.624945 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:49:50.625864 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:49:50.625920 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:49:50.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.628923 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:49:50.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.628995 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:49:50.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.630263 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:49:50.630366 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:49:50.631598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:49:50.631663 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:49:50.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.633763 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:49:50.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.644064 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:49:50.644140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:49:50.645878 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:49:50.645952 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:49:50.646794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:49:50.646857 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:49:50.649492 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:49:50.650252 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:49:50.650391 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:49:50.651959 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:49:50.652100 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:49:50.689113 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:49:50.689269 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:49:50.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.690855 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:49:50.705821 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:49:50.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:50.705900 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:49:50.708205 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:49:50.802044 systemd[1]: Switching root.
Dec 13 14:49:50.805000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:49:50.805000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:49:50.806000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:49:50.806000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:49:50.806000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:49:50.825620 systemd-journald[201]: Journal stopped
Dec 13 14:49:54.990310 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:49:54.990488 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:49:54.990527 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:49:54.990561 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:49:54.990594 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:49:54.990615 kernel: SELinux: policy capability open_perms=1
Dec 13 14:49:54.990634 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:49:54.990659 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:49:54.990684 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:49:54.990712 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:49:54.990731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:49:54.990751 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:49:54.990785 systemd[1]: Successfully loaded SELinux policy in 58.549ms.
Dec 13 14:49:54.990830 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.335ms.
Dec 13 14:49:54.990854 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:49:54.990875 systemd[1]: Detected virtualization kvm.
Dec 13 14:49:54.990914 systemd[1]: Detected architecture x86-64.
Dec 13 14:49:54.990945 systemd[1]: Detected first boot.
Dec 13 14:49:54.990966 systemd[1]: Hostname set to .
Dec 13 14:49:54.990987 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:49:54.991029 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:49:54.991090 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:49:54.991115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:49:54.991145 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:49:54.991180 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:49:54.991218 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:49:54.991241 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 14:49:54.991261 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:49:54.991288 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:49:54.991319 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:49:54.991364 systemd[1]: Created slice system-getty.slice.
Dec 13 14:49:54.991401 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:49:54.991425 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:49:54.991445 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:49:54.991472 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:49:54.991513 systemd[1]: Created slice user.slice.
Dec 13 14:49:54.991535 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:49:54.991555 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:49:54.991575 systemd[1]: Set up automount boot.automount.
Dec 13 14:49:54.991596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:49:54.991627 systemd[1]: Reached target integritysetup.target.
Dec 13 14:49:54.991652 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:49:54.991672 systemd[1]: Reached target remote-fs.target.
Dec 13 14:49:54.991692 systemd[1]: Reached target slices.target.
Dec 13 14:49:54.991727 systemd[1]: Reached target swap.target.
Dec 13 14:49:54.991754 systemd[1]: Reached target torcx.target.
Dec 13 14:49:54.991776 systemd[1]: Reached target veritysetup.target.
Dec 13 14:49:54.991812 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:49:54.991842 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:49:54.991863 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:49:54.991883 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:49:54.991918 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:49:54.991941 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:49:54.991961 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:49:54.991981 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:49:54.992008 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:49:54.992030 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:49:54.992063 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:49:54.992084 systemd[1]: Mounting media.mount...
Dec 13 14:49:54.992106 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:49:54.992126 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:49:54.992146 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:49:54.992166 systemd[1]: Mounting tmp.mount...
Dec 13 14:49:54.992185 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:49:54.992206 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:49:54.992250 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:49:54.992272 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:49:54.992292 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:49:54.992319 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:49:54.992363 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:49:54.992393 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:49:54.992415 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:49:54.992442 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:49:54.992473 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 14:49:54.992505 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 14:49:54.992548 systemd[1]: Starting systemd-journald.service...
Dec 13 14:49:54.992570 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:49:54.992593 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:49:54.992613 kernel: loop: module loaded
Dec 13 14:49:54.992638 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:49:54.992667 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:49:54.992687 kernel: fuse: init (API version 7.34)
Dec 13 14:49:54.992722 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:49:54.992754 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:49:54.992782 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:49:54.992809 systemd[1]: Mounted media.mount.
Dec 13 14:49:54.992831 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:49:54.992872 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:49:54.992894 systemd[1]: Mounted tmp.mount.
Dec 13 14:49:54.992935 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:49:54.992956 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:49:54.992977 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:49:54.993004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:49:54.993038 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:49:54.993061 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:49:54.993081 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:49:54.993101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:49:54.993121 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:49:54.993141 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:49:54.993163 systemd-journald[1015]: Journal started
Dec 13 14:49:54.993236 systemd-journald[1015]: Runtime Journal (/run/log/journal/13a47ed4dae84ff7bbcc5b696543e75a) is 4.7M, max 38.1M, 33.3M free.
Dec 13 14:49:54.756000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:49:54.756000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:49:54.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.987000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:49:54.987000 audit[1015]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc3856fbd0 a2=4000 a3=7ffc3856fc6c items=0 ppid=1 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:49:54.987000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:49:54.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.995357 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:49:54.999106 systemd[1]: Started systemd-journald.service.
Dec 13 14:49:54.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:54.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.000063 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:49:55.000314 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:49:55.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.004582 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:49:55.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.009401 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:49:55.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.012574 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:49:55.013784 systemd[1]: Reached target network-pre.target.
Dec 13 14:49:55.016113 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:49:55.018919 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:49:55.019629 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:49:55.025811 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:49:55.030465 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:49:55.032607 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:49:55.036460 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:49:55.037228 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:49:55.039075 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:49:55.044406 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:49:55.047554 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:49:55.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.052538 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:49:55.053382 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:49:55.053672 systemd-journald[1015]: Time spent on flushing to /var/log/journal/13a47ed4dae84ff7bbcc5b696543e75a is 88.351ms for 1218 entries.
Dec 13 14:49:55.053672 systemd-journald[1015]: System Journal (/var/log/journal/13a47ed4dae84ff7bbcc5b696543e75a) is 8.0M, max 584.8M, 576.8M free.
Dec 13 14:49:55.149307 systemd-journald[1015]: Received client request to flush runtime journal.
Dec 13 14:49:55.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.072480 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:49:55.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.075123 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:49:55.087467 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:49:55.116970 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:49:55.119572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:49:55.150457 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:49:55.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.170507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:49:55.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.212539 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:49:55.215041 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:49:55.234335 udevadm[1071]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:49:55.738031 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:49:55.745823 kernel: kauditd_printk_skb: 78 callbacks suppressed
Dec 13 14:49:55.745907 kernel: audit: type=1130 audit(1734101395.738:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.740720 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:49:55.772041 systemd-udevd[1074]: Using default interface naming scheme 'v252'.
Dec 13 14:49:55.804256 systemd[1]: Started systemd-udevd.service.
Dec 13 14:49:55.812861 kernel: audit: type=1130 audit(1734101395.804:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.807675 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:49:55.821962 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:49:55.878331 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:49:55.890469 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:49:55.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:55.897384 kernel: audit: type=1130 audit(1734101395.890:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:56.027435 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 13 14:49:56.043063 systemd-networkd[1075]: lo: Link UP
Dec 13 14:49:56.043080 systemd-networkd[1075]: lo: Gained carrier
Dec 13 14:49:56.044026 systemd-networkd[1075]: Enumeration completed
Dec 13 14:49:56.044257 systemd[1]: Started systemd-networkd.service.
Dec 13 14:49:56.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:56.045564 systemd-networkd[1075]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:49:56.048329 systemd-networkd[1075]: eth0: Link UP
Dec 13 14:49:56.048355 systemd-networkd[1075]: eth0: Gained carrier
Dec 13 14:49:56.052358 kernel: audit: type=1130 audit(1734101396.044:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:56.067395 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:49:56.070558 systemd-networkd[1075]: eth0: DHCPv4 address 10.230.34.126/30, gateway 10.230.34.125 acquired from 10.230.34.125
Dec 13 14:49:56.078964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:49:56.088381 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:49:56.122000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:49:56.133359 kernel: audit: type=1400 audit(1734101396.122:122): avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:49:56.122000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56331af72f60 a1=337fc a2=7f1c14c20bc5 a3=5 items=110 ppid=1074 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:49:56.143555 kernel: audit: type=1300 audit(1734101396.122:122): arch=c000003e syscall=175 success=yes exit=0 a0=56331af72f60 a1=337fc a2=7f1c14c20bc5 a3=5 items=110 ppid=1074 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:49:56.143618 kernel: audit: type=1307 audit(1734101396.122:122): cwd="/"
Dec 13 14:49:56.122000 audit: CWD cwd="/"
Dec 13 14:49:56.148963 kernel: audit: type=1302 audit(1734101396.122:122): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.158388 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 14:49:56.173112 kernel: audit: type=1302 audit(1734101396.122:122): item=1 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.173185 kernel: audit: type=1302 audit(1734101396.122:122): item=2 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=1 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=2 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=3 name=(null) inode=15923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=4 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=5 name=(null) inode=15924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=6 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=7 name=(null) inode=15925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=8 name=(null) inode=15925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=9 name=(null) inode=15926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=10 name=(null) inode=15925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=11 name=(null) inode=15927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=12 name=(null) inode=15925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=13 name=(null) inode=15928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=14 name=(null) inode=15925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=15 name=(null) inode=15929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=16 name=(null) inode=15925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=17 name=(null) inode=15930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=18 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=19 name=(null) inode=15931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=20 name=(null) inode=15931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=21 name=(null) inode=15932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:49:56.122000 audit: PATH item=22 name=(null)
inode=15931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=23 name=(null) inode=15933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=24 name=(null) inode=15931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=25 name=(null) inode=15934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=26 name=(null) inode=15931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=27 name=(null) inode=15935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=28 name=(null) inode=15931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=29 name=(null) inode=15936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=30 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=31 name=(null) inode=15937 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=32 name=(null) inode=15937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=33 name=(null) inode=15938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=34 name=(null) inode=15937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=35 name=(null) inode=15939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=36 name=(null) inode=15937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=37 name=(null) inode=15940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=38 name=(null) inode=15937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=39 name=(null) inode=15941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=40 name=(null) inode=15937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=41 name=(null) inode=15942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=42 name=(null) inode=15922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=43 name=(null) inode=15943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=44 name=(null) inode=15943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=45 name=(null) inode=15944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=46 name=(null) inode=15943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=47 name=(null) inode=15945 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=48 name=(null) inode=15943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=49 name=(null) inode=15946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=50 name=(null) inode=15943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=51 name=(null) inode=15947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=52 name=(null) inode=15943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=53 name=(null) inode=15948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=55 name=(null) inode=15949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=56 name=(null) inode=15949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=57 name=(null) inode=15950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=58 name=(null) inode=15949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=59 name=(null) inode=15951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=60 name=(null) inode=15949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=61 name=(null) inode=15952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=62 name=(null) inode=15952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=63 name=(null) inode=15953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=64 name=(null) inode=15952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=65 name=(null) inode=15954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=66 name=(null) inode=15952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=67 name=(null) inode=15955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:49:56.122000 audit: PATH item=68 name=(null) inode=15952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=69 name=(null) inode=15956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=70 name=(null) inode=15952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=71 name=(null) inode=15957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=72 name=(null) inode=15949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=73 name=(null) inode=15958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=74 name=(null) inode=15958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=75 name=(null) inode=15959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=76 name=(null) inode=15958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=77 
name=(null) inode=15960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=78 name=(null) inode=15958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=79 name=(null) inode=15961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=80 name=(null) inode=15958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=81 name=(null) inode=15962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=82 name=(null) inode=15958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=83 name=(null) inode=15963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=84 name=(null) inode=15949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=85 name=(null) inode=15964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=86 name=(null) inode=15964 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=87 name=(null) inode=15965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=88 name=(null) inode=15964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=89 name=(null) inode=15966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=90 name=(null) inode=15964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=91 name=(null) inode=15967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=92 name=(null) inode=15964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=93 name=(null) inode=15968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=94 name=(null) inode=15964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=95 name=(null) inode=15969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=96 name=(null) inode=15949 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=97 name=(null) inode=15970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=98 name=(null) inode=15970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=99 name=(null) inode=15971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=100 name=(null) inode=15970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=101 name=(null) inode=15972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=102 name=(null) inode=15970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=103 name=(null) inode=15973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=104 name=(null) inode=15970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=105 name=(null) inode=15974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=106 name=(null) inode=15970 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=107 name=(null) inode=15975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PATH item=109 name=(null) inode=15976 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:49:56.122000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:49:56.228387 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:49:56.255076 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:49:56.255427 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:49:56.388319 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:49:56.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.392111 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:49:56.426591 lvm[1104]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 14:49:56.464034 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:49:56.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.465171 systemd[1]: Reached target cryptsetup.target. Dec 13 14:49:56.468433 systemd[1]: Starting lvm2-activation.service... Dec 13 14:49:56.477383 lvm[1106]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:49:56.508512 systemd[1]: Finished lvm2-activation.service. Dec 13 14:49:56.509446 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:49:56.510195 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:49:56.510252 systemd[1]: Reached target local-fs.target. Dec 13 14:49:56.510967 systemd[1]: Reached target machines.target. Dec 13 14:49:56.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.514005 systemd[1]: Starting ldconfig.service... Dec 13 14:49:56.515879 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:49:56.515998 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:49:56.517993 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:49:56.521806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:49:56.526997 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:49:56.533666 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:49:56.547769 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Dec 13 14:49:56.549643 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:49:56.554913 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:49:56.563001 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:49:56.563402 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:49:56.566874 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:49:56.569833 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:49:56.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.592382 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:49:56.599528 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:49:56.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.623434 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:49:56.647381 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:49:56.664560 (sd-sysext)[1126]: Using extensions 'kubernetes'. Dec 13 14:49:56.667913 (sd-sysext)[1126]: Merged extensions into '/usr'. Dec 13 14:49:56.703376 systemd-fsck[1123]: fsck.fat 4.2 (2021-01-31) Dec 13 14:49:56.703376 systemd-fsck[1123]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:49:56.717963 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Dec 13 14:49:56.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.721776 systemd[1]: Mounting boot.mount... Dec 13 14:49:56.726545 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:49:56.729460 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:49:56.735760 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:49:56.738744 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:49:56.743617 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:49:56.746047 systemd[1]: Starting modprobe@loop.service... Dec 13 14:49:56.748579 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:49:56.748796 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:49:56.749049 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:49:56.765467 systemd[1]: Mounted boot.mount. Dec 13 14:49:56.766467 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:49:56.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:56.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:49:56.768040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:49:56.768298 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:49:56.769559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:49:56.769774 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:49:56.771656 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:49:56.771905 systemd[1]: Finished modprobe@loop.service. Dec 13 14:49:56.777792 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:49:56.777983 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:49:56.785289 systemd[1]: Finished systemd-sysext.service. Dec 13 14:49:56.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:49:56.794023 systemd[1]: Starting ensure-sysext.service... Dec 13 14:49:56.799085 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:49:56.812563 systemd[1]: Reloading. Dec 13 14:49:56.829233 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:49:56.836402 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:49:56.848998 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:49:57.003545 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T14:49:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:49:57.003614 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T14:49:57Z" level=info msg="torcx already run" Dec 13 14:49:57.048099 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:49:57.156966 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:49:57.157001 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:49:57.188984 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:49:57.269799 systemd[1]: Finished ldconfig.service. 
Dec 13 14:49:57.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.271139 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:49:57.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.273502 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:49:57.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.278224 systemd[1]: Starting audit-rules.service...
Dec 13 14:49:57.280853 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:49:57.283909 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:49:57.289863 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:49:57.295157 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:49:57.299891 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:49:57.306866 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:49:57.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.309177 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:49:57.327783 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.329000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.330390 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:49:57.333144 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:49:57.338062 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:49:57.339008 systemd-networkd[1075]: eth0: Gained IPv6LL
Dec 13 14:49:57.340001 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.340246 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:49:57.340498 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:49:57.349418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:49:57.349668 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:49:57.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.351077 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:49:57.351314 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:49:57.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.352722 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.353027 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.353246 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:49:57.353543 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.353754 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:49:57.356769 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:49:57.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.363256 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:49:57.363540 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:49:57.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.365123 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:49:57.375297 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.380633 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:49:57.383491 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:49:57.386703 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:49:57.389809 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:49:57.393437 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.393689 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:49:57.399500 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:49:57.400508 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:49:57.402739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:49:57.403031 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:49:57.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.405254 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:49:57.405547 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:49:57.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.411569 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:49:57.417954 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:49:57.418192 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:49:57.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.419020 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:49:57.423979 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:49:57.424236 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:49:57.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.425089 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.428148 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:49:57.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.454457 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:49:57.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.457529 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:49:57.472700 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:49:57.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:49:57.486000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:49:57.486000 audit[1263]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed0fd2630 a2=420 a3=0 items=0 ppid=1220 pid=1263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:49:57.486000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:49:57.487063 augenrules[1263]: No rules
Dec 13 14:49:57.487784 systemd[1]: Finished audit-rules.service.
Dec 13 14:49:57.521585 systemd-resolved[1223]: Positive Trust Anchors:
Dec 13 14:49:57.522073 systemd-resolved[1223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:49:57.522223 systemd-resolved[1223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:49:57.530858 systemd-resolved[1223]: Using system hostname 'srv-h3gjt.gb1.brightbox.com'.
Dec 13 14:49:57.533119 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:49:57.533994 systemd[1]: Started systemd-resolved.service.
Dec 13 14:49:57.534720 systemd[1]: Reached target network.target.
Dec 13 14:49:57.535376 systemd[1]: Reached target network-online.target.
Dec 13 14:49:57.536041 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:49:57.536684 systemd[1]: Reached target sysinit.target.
Dec 13 14:49:57.537452 systemd[1]: Started motdgen.path.
Dec 13 14:49:57.538108 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:49:57.538930 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:49:57.540007 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:49:57.540060 systemd[1]: Reached target paths.target.
Dec 13 14:49:57.540707 systemd[1]: Reached target time-set.target.
Dec 13 14:49:57.542013 systemd[1]: Started logrotate.timer.
Dec 13 14:49:57.542871 systemd[1]: Started mdadm.timer.
Dec 13 14:49:57.543464 systemd[1]: Reached target timers.target.
Dec 13 14:49:57.544676 systemd[1]: Listening on dbus.socket.
Dec 13 14:49:57.548029 systemd[1]: Starting docker.socket...
Dec 13 14:49:57.564508 systemd[1]: Listening on sshd.socket.
Dec 13 14:49:57.565253 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:49:57.568423 systemd[1]: Listening on docker.socket.
Dec 13 14:49:57.569134 systemd[1]: Reached target sockets.target.
Dec 13 14:49:57.569798 systemd[1]: Reached target basic.target.
Dec 13 14:49:57.570683 systemd[1]: System is tainted: cgroupsv1
Dec 13 14:49:57.570733 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:49:57.570786 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.570826 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:49:57.572746 systemd[1]: Starting containerd.service...
Dec 13 14:49:57.576714 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:49:57.580044 systemd[1]: Starting dbus.service...
Dec 13 14:49:57.582948 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:49:57.586024 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:49:57.587619 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:49:57.594918 systemd[1]: Starting kubelet.service...
Dec 13 14:49:57.599597 systemd[1]: Starting motdgen.service...
Dec 13 14:49:57.605145 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:49:57.613477 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:49:57.662847 jq[1279]: false
Dec 13 14:49:57.620182 systemd[1]: Starting systemd-logind.service...
Dec 13 14:49:57.621028 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:49:57.621184 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:49:57.623894 systemd[1]: Starting update-engine.service...
Dec 13 14:49:57.667620 jq[1292]: true
Dec 13 14:49:57.630310 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:49:57.631331 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:49:57.636758 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:49:57.668588 jq[1298]: true
Dec 13 14:49:57.637254 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:49:57.667649 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:49:57.668068 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:49:58.297125 systemd-resolved[1223]: Clock change detected. Flushing caches.
Dec 13 14:49:58.297290 systemd-timesyncd[1224]: Contacted time server 212.69.41.125:123 (0.flatcar.pool.ntp.org).
Dec 13 14:49:58.297521 systemd-timesyncd[1224]: Initial clock synchronization to Fri 2024-12-13 14:49:58.296802 UTC.
Dec 13 14:49:58.315720 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:49:58.316120 systemd[1]: Finished motdgen.service.
Dec 13 14:49:58.347003 extend-filesystems[1280]: Found loop1
Dec 13 14:49:58.349586 extend-filesystems[1280]: Found vda
Dec 13 14:49:58.350540 extend-filesystems[1280]: Found vda1
Dec 13 14:49:58.352073 extend-filesystems[1280]: Found vda2
Dec 13 14:49:58.354331 extend-filesystems[1280]: Found vda3
Dec 13 14:49:58.354331 extend-filesystems[1280]: Found usr
Dec 13 14:49:58.354331 extend-filesystems[1280]: Found vda4
Dec 13 14:49:58.354331 extend-filesystems[1280]: Found vda6
Dec 13 14:49:58.354331 extend-filesystems[1280]: Found vda7
Dec 13 14:49:58.354331 extend-filesystems[1280]: Found vda9
Dec 13 14:49:58.354331 extend-filesystems[1280]: Checking size of /dev/vda9
Dec 13 14:49:58.364285 dbus-daemon[1276]: [system] SELinux support is enabled
Dec 13 14:49:58.370528 systemd[1]: Started dbus.service.
Dec 13 14:49:58.390736 dbus-daemon[1276]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1075 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:49:58.393240 update_engine[1289]: I1213 14:49:58.392433 1289 main.cc:92] Flatcar Update Engine starting
Dec 13 14:49:58.386054 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:49:58.395758 dbus-daemon[1276]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:49:58.386091 systemd[1]: Reached target system-config.target.
Dec 13 14:49:58.386944 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:49:58.386992 systemd[1]: Reached target user-config.target.
Dec 13 14:49:58.402628 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:49:58.415546 systemd[1]: Started update-engine.service.
Dec 13 14:49:58.419917 systemd[1]: Started locksmithd.service.
Dec 13 14:49:58.427121 extend-filesystems[1280]: Resized partition /dev/vda9
Dec 13 14:49:58.428026 update_engine[1289]: I1213 14:49:58.427228 1289 update_check_scheduler.cc:74] Next update check in 11m22s
Dec 13 14:49:58.434699 extend-filesystems[1338]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:49:58.454072 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 14:49:58.454065 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:49:58.454226 bash[1334]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:49:58.455317 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:49:58.537229 env[1300]: time="2024-12-13T14:49:58.537101357Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:49:58.545212 systemd-logind[1287]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 14:49:58.545257 systemd-logind[1287]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:49:58.549693 systemd-logind[1287]: New seat seat0.
Dec 13 14:49:58.558246 systemd[1]: Started systemd-logind.service.
Dec 13 14:49:58.590841 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 14:49:58.608634 extend-filesystems[1338]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:49:58.608634 extend-filesystems[1338]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 14:49:58.608634 extend-filesystems[1338]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 14:49:58.611478 extend-filesystems[1280]: Resized filesystem in /dev/vda9
Dec 13 14:49:58.611462 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:49:58.611847 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:49:58.621246 env[1300]: time="2024-12-13T14:49:58.621189196Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:49:58.628241 env[1300]: time="2024-12-13T14:49:58.628185145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:49:58.631570 env[1300]: time="2024-12-13T14:49:58.631526058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:49:58.631710 env[1300]: time="2024-12-13T14:49:58.631679755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:49:58.632117 env[1300]: time="2024-12-13T14:49:58.632082007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:49:58.632238 env[1300]: time="2024-12-13T14:49:58.632208465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:49:58.632369 env[1300]: time="2024-12-13T14:49:58.632337569Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:49:58.632521 env[1300]: time="2024-12-13T14:49:58.632490249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:49:58.633590 env[1300]: time="2024-12-13T14:49:58.633549241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:49:58.637068 env[1300]: time="2024-12-13T14:49:58.637023997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:49:58.638027 env[1300]: time="2024-12-13T14:49:58.637990728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:49:58.638165 env[1300]: time="2024-12-13T14:49:58.638133928Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:49:58.638458 env[1300]: time="2024-12-13T14:49:58.638358284Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:49:58.640951 env[1300]: time="2024-12-13T14:49:58.640917507Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:49:58.647568 env[1300]: time="2024-12-13T14:49:58.647374403Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:49:58.647568 env[1300]: time="2024-12-13T14:49:58.647441402Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:49:58.647568 env[1300]: time="2024-12-13T14:49:58.647499375Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:49:58.647814 env[1300]: time="2024-12-13T14:49:58.647782911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.648025 env[1300]: time="2024-12-13T14:49:58.647996095Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.648198 env[1300]: time="2024-12-13T14:49:58.648168463Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.648379 env[1300]: time="2024-12-13T14:49:58.648351096Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.648542 env[1300]: time="2024-12-13T14:49:58.648513331Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.648694 env[1300]: time="2024-12-13T14:49:58.648666171Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.648846 env[1300]: time="2024-12-13T14:49:58.648816733Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.649032 env[1300]: time="2024-12-13T14:49:58.648953970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.649249 env[1300]: time="2024-12-13T14:49:58.649219578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:49:58.649617 env[1300]: time="2024-12-13T14:49:58.649589945Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:49:58.649946 env[1300]: time="2024-12-13T14:49:58.649918886Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:49:58.650682 env[1300]: time="2024-12-13T14:49:58.650653710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:49:58.650862 env[1300]: time="2024-12-13T14:49:58.650824407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.651048 env[1300]: time="2024-12-13T14:49:58.650962961Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:49:58.651282 env[1300]: time="2024-12-13T14:49:58.651252350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.651458 env[1300]: time="2024-12-13T14:49:58.651419306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.651608 env[1300]: time="2024-12-13T14:49:58.651578567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.651753 env[1300]: time="2024-12-13T14:49:58.651724930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.651897 env[1300]: time="2024-12-13T14:49:58.651867770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.652061 env[1300]: time="2024-12-13T14:49:58.652033247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.652216 env[1300]: time="2024-12-13T14:49:58.652187964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.652382 env[1300]: time="2024-12-13T14:49:58.652351765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.652573 env[1300]: time="2024-12-13T14:49:58.652545507Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:49:58.653076 env[1300]: time="2024-12-13T14:49:58.653037562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.658693 env[1300]: time="2024-12-13T14:49:58.658659813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.658893 env[1300]: time="2024-12-13T14:49:58.658840911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.659051 env[1300]: time="2024-12-13T14:49:58.659022372Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:49:58.659201 env[1300]: time="2024-12-13T14:49:58.659170156Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:49:58.659336 env[1300]: time="2024-12-13T14:49:58.659308406Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:49:58.659557 env[1300]: time="2024-12-13T14:49:58.659490997Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:49:58.659759 env[1300]: time="2024-12-13T14:49:58.659712590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:49:58.660365 env[1300]: time="2024-12-13T14:49:58.660256144Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:49:58.662829 env[1300]: time="2024-12-13T14:49:58.660637387Z" level=info msg="Connect containerd service"
Dec 13 14:49:58.662829 env[1300]: time="2024-12-13T14:49:58.660737582Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:49:58.664125 env[1300]: time="2024-12-13T14:49:58.664084400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:49:58.664628 env[1300]: time="2024-12-13T14:49:58.664597773Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:49:58.664714 env[1300]: time="2024-12-13T14:49:58.664685593Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:49:58.664995 systemd[1]: Started containerd.service.
Dec 13 14:49:58.668539 env[1300]: time="2024-12-13T14:49:58.668474232Z" level=info msg="containerd successfully booted in 0.158030s"
Dec 13 14:49:58.677586 env[1300]: time="2024-12-13T14:49:58.677537872Z" level=info msg="Start subscribing containerd event"
Dec 13 14:49:58.677680 env[1300]: time="2024-12-13T14:49:58.677626929Z" level=info msg="Start recovering state"
Dec 13 14:49:58.677839 env[1300]: time="2024-12-13T14:49:58.677804622Z" level=info msg="Start event monitor"
Dec 13 14:49:58.677933 env[1300]: time="2024-12-13T14:49:58.677851179Z" level=info msg="Start snapshots syncer"
Dec 13 14:49:58.677933 env[1300]: time="2024-12-13T14:49:58.677881075Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:49:58.677933 env[1300]: time="2024-12-13T14:49:58.677897504Z" level=info msg="Start streaming server"
Dec 13 14:49:58.713862 dbus-daemon[1276]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:49:58.714070 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:49:58.717678 dbus-daemon[1276]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1333 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:49:58.723418 systemd[1]: Starting polkit.service...
Dec 13 14:49:58.745507 polkitd[1348]: Started polkitd version 121
Dec 13 14:49:58.765941 polkitd[1348]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:49:58.766302 polkitd[1348]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:49:58.769154 polkitd[1348]: Finished loading, compiling and executing 2 rules
Dec 13 14:49:58.770236 dbus-daemon[1276]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:49:58.771126 systemd[1]: Started polkit.service.
Dec 13 14:49:58.772345 polkitd[1348]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:49:58.790872 systemd-hostnamed[1333]: Hostname set to (static)
Dec 13 14:49:58.800102 systemd-networkd[1075]: eth0: Ignoring DHCPv6 address 2a02:1348:179:889f:24:19ff:fee6:227e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:889f:24:19ff:fee6:227e/64 assigned by NDisc.
Dec 13 14:49:58.800115 systemd-networkd[1075]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 14:49:58.915797 locksmithd[1335]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:49:59.505693 systemd[1]: Started kubelet.service.
Dec 13 14:50:00.303528 kubelet[1366]: E1213 14:50:00.303355 1366 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:50:00.306283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:50:00.306648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:50:00.723124 sshd_keygen[1312]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:50:00.751850 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:50:00.755717 systemd[1]: Starting issuegen.service...
Dec 13 14:50:00.758755 systemd[1]: Started sshd@0-10.230.34.126:22-139.178.68.195:33816.service.
Dec 13 14:50:00.767992 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:50:00.768365 systemd[1]: Finished issuegen.service.
Dec 13 14:50:00.777300 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:50:00.790567 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:50:00.793866 systemd[1]: Started getty@tty1.service.
Dec 13 14:50:00.796941 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:50:00.799565 systemd[1]: Reached target getty.target.
Dec 13 14:50:01.687871 sshd[1382]: Accepted publickey for core from 139.178.68.195 port 33816 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:01.690988 sshd[1382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:01.712348 systemd[1]: Created slice user-500.slice.
Dec 13 14:50:01.715095 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:50:01.722743 systemd-logind[1287]: New session 1 of user core.
Dec 13 14:50:01.732193 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:50:01.734921 systemd[1]: Starting user@500.service...
Dec 13 14:50:01.742533 (systemd)[1395]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:01.862766 systemd[1395]: Queued start job for default target default.target.
Dec 13 14:50:01.863693 systemd[1395]: Reached target paths.target.
Dec 13 14:50:01.863884 systemd[1395]: Reached target sockets.target.
Dec 13 14:50:01.864089 systemd[1395]: Reached target timers.target.
Dec 13 14:50:01.864237 systemd[1395]: Reached target basic.target.
Dec 13 14:50:01.864486 systemd[1395]: Reached target default.target.
Dec 13 14:50:01.864614 systemd[1]: Started user@500.service.
Dec 13 14:50:01.864796 systemd[1395]: Startup finished in 112ms.
Dec 13 14:50:01.867036 systemd[1]: Started session-1.scope.
Dec 13 14:50:02.497799 systemd[1]: Started sshd@1-10.230.34.126:22-139.178.68.195:33822.service.
Dec 13 14:50:03.390178 sshd[1404]: Accepted publickey for core from 139.178.68.195 port 33822 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:03.392436 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:03.399818 systemd-logind[1287]: New session 2 of user core.
Dec 13 14:50:03.400698 systemd[1]: Started session-2.scope.
Dec 13 14:50:04.016804 sshd[1404]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:04.020307 systemd[1]: sshd@1-10.230.34.126:22-139.178.68.195:33822.service: Deactivated successfully.
Dec 13 14:50:04.021923 systemd-logind[1287]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:50:04.022044 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:50:04.023960 systemd-logind[1287]: Removed session 2.
Dec 13 14:50:04.162622 systemd[1]: Started sshd@2-10.230.34.126:22-139.178.68.195:33824.service.
Dec 13 14:50:05.051919 sshd[1411]: Accepted publickey for core from 139.178.68.195 port 33824 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:05.053846 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:05.060942 systemd-logind[1287]: New session 3 of user core.
Dec 13 14:50:05.061767 systemd[1]: Started session-3.scope.
Dec 13 14:50:05.358912 coreos-metadata[1275]: Dec 13 14:50:05.358 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 14:50:05.416206 coreos-metadata[1275]: Dec 13 14:50:05.415 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 14:50:05.442320 coreos-metadata[1275]: Dec 13 14:50:05.442 INFO Fetch successful
Dec 13 14:50:05.442443 coreos-metadata[1275]: Dec 13 14:50:05.442 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 14:50:05.481648 coreos-metadata[1275]: Dec 13 14:50:05.481 INFO Fetch successful
Dec 13 14:50:05.497886 unknown[1275]: wrote ssh authorized keys file for user: core
Dec 13 14:50:05.513059 update-ssh-keys[1417]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:50:05.513925 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:50:05.515228 systemd[1]: Reached target multi-user.target.
Dec 13 14:50:05.518784 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:50:05.530528 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:50:05.530877 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:50:05.539785 systemd[1]: Startup finished in 8.459s (kernel) + 13.970s (userspace) = 22.429s.
Dec 13 14:50:05.693851 sshd[1411]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:05.698131 systemd[1]: sshd@2-10.230.34.126:22-139.178.68.195:33824.service: Deactivated successfully.
Dec 13 14:50:05.699245 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:50:05.701009 systemd-logind[1287]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:50:05.702643 systemd-logind[1287]: Removed session 3.
Dec 13 14:50:10.558377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:50:10.558720 systemd[1]: Stopped kubelet.service.
Dec 13 14:50:10.561344 systemd[1]: Starting kubelet.service...
Dec 13 14:50:10.727630 systemd[1]: Started kubelet.service.
Dec 13 14:50:10.828170 kubelet[1434]: E1213 14:50:10.827883 1434 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:50:10.832822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:50:10.833173 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:50:15.842567 systemd[1]: Started sshd@3-10.230.34.126:22-139.178.68.195:37000.service.
Dec 13 14:50:16.730240 sshd[1442]: Accepted publickey for core from 139.178.68.195 port 37000 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:16.732646 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:16.740453 systemd-logind[1287]: New session 4 of user core.
Dec 13 14:50:16.741312 systemd[1]: Started session-4.scope.
Dec 13 14:50:17.350804 sshd[1442]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:17.355222 systemd[1]: sshd@3-10.230.34.126:22-139.178.68.195:37000.service: Deactivated successfully.
Dec 13 14:50:17.356642 systemd-logind[1287]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:50:17.356746 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:50:17.358231 systemd-logind[1287]: Removed session 4.
Dec 13 14:50:17.496880 systemd[1]: Started sshd@4-10.230.34.126:22-139.178.68.195:34830.service.
Dec 13 14:50:18.385290 sshd[1449]: Accepted publickey for core from 139.178.68.195 port 34830 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:18.387981 sshd[1449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:18.395370 systemd[1]: Started session-5.scope.
Dec 13 14:50:18.396406 systemd-logind[1287]: New session 5 of user core.
Dec 13 14:50:18.999430 sshd[1449]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:19.003228 systemd[1]: sshd@4-10.230.34.126:22-139.178.68.195:34830.service: Deactivated successfully.
Dec 13 14:50:19.004288 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:50:19.006438 systemd-logind[1287]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:50:19.008070 systemd-logind[1287]: Removed session 5.
Dec 13 14:50:19.144064 systemd[1]: Started sshd@5-10.230.34.126:22-139.178.68.195:34832.service.
Dec 13 14:50:20.032104 sshd[1456]: Accepted publickey for core from 139.178.68.195 port 34832 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:20.034863 sshd[1456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:20.041655 systemd-logind[1287]: New session 6 of user core.
Dec 13 14:50:20.042455 systemd[1]: Started session-6.scope.
Dec 13 14:50:20.651499 sshd[1456]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:20.655401 systemd[1]: sshd@5-10.230.34.126:22-139.178.68.195:34832.service: Deactivated successfully.
Dec 13 14:50:20.656470 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:50:20.657780 systemd-logind[1287]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:50:20.659159 systemd-logind[1287]: Removed session 6.
Dec 13 14:50:20.797752 systemd[1]: Started sshd@6-10.230.34.126:22-139.178.68.195:34836.service.
Dec 13 14:50:20.899599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:50:20.899922 systemd[1]: Stopped kubelet.service.
Dec 13 14:50:20.902701 systemd[1]: Starting kubelet.service...
Dec 13 14:50:21.028659 systemd[1]: Started kubelet.service.
Dec 13 14:50:21.156107 kubelet[1473]: E1213 14:50:21.155892 1473 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:50:21.159056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:50:21.159359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:50:21.685906 sshd[1463]: Accepted publickey for core from 139.178.68.195 port 34836 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:21.687359 sshd[1463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:21.696054 systemd-logind[1287]: New session 7 of user core.
Dec 13 14:50:21.696706 systemd[1]: Started session-7.scope.
Dec 13 14:50:22.173956 sudo[1482]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:50:22.174993 sudo[1482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:50:22.199367 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:50:28.828114 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:50:29.255299 coreos-metadata[1486]: Dec 13 14:50:29.255 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 14:50:29.308852 coreos-metadata[1486]: Dec 13 14:50:29.308 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 14:50:29.311016 coreos-metadata[1486]: Dec 13 14:50:29.310 INFO Fetch successful
Dec 13 14:50:29.311293 coreos-metadata[1486]: Dec 13 14:50:29.311 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 14:50:29.323542 coreos-metadata[1486]: Dec 13 14:50:29.323 INFO Fetch successful
Dec 13 14:50:29.323795 coreos-metadata[1486]: Dec 13 14:50:29.323 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 14:50:29.339107 coreos-metadata[1486]: Dec 13 14:50:29.339 INFO Fetch successful
Dec 13 14:50:29.339302 coreos-metadata[1486]: Dec 13 14:50:29.339 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 14:50:29.353615 coreos-metadata[1486]: Dec 13 14:50:29.353 INFO Fetch successful
Dec 13 14:50:29.353816 coreos-metadata[1486]: Dec 13 14:50:29.353 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 14:50:29.368570 coreos-metadata[1486]: Dec 13 14:50:29.368 INFO Fetch successful
Dec 13 14:50:29.381369 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:50:30.285537 systemd[1]: Stopped kubelet.service.
Dec 13 14:50:30.291197 systemd[1]: Starting kubelet.service...
Dec 13 14:50:30.322561 systemd[1]: Reloading.
Dec 13 14:50:30.487497 /usr/lib/systemd/system-generators/torcx-generator[1555]: time="2024-12-13T14:50:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:50:30.488149 /usr/lib/systemd/system-generators/torcx-generator[1555]: time="2024-12-13T14:50:30Z" level=info msg="torcx already run"
Dec 13 14:50:30.610921 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:50:30.610995 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:50:30.643416 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:50:30.766541 systemd[1]: Started kubelet.service.
Dec 13 14:50:30.770739 systemd[1]: Stopping kubelet.service...
Dec 13 14:50:30.771732 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:50:30.774368 systemd[1]: Stopped kubelet.service.
Dec 13 14:50:30.780202 systemd[1]: Starting kubelet.service...
Dec 13 14:50:30.903763 systemd[1]: Started kubelet.service.
Dec 13 14:50:30.973241 kubelet[1623]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:50:30.973241 kubelet[1623]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:50:30.973241 kubelet[1623]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:50:30.973910 kubelet[1623]: I1213 14:50:30.973263 1623 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:50:31.964464 kubelet[1623]: I1213 14:50:31.964412 1623 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:50:31.964464 kubelet[1623]: I1213 14:50:31.964459 1623 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:50:31.964882 kubelet[1623]: I1213 14:50:31.964830 1623 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:50:32.002222 kubelet[1623]: I1213 14:50:32.002177 1623 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:50:32.026253 kubelet[1623]: I1213 14:50:32.026206 1623 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:50:32.028564 kubelet[1623]: I1213 14:50:32.028528 1623 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:50:32.028919 kubelet[1623]: I1213 14:50:32.028810 1623 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:50:32.030477 kubelet[1623]: I1213 14:50:32.030428 1623 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:50:32.030477 kubelet[1623]: I1213 14:50:32.030460 1623 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:50:32.030737 kubelet[1623]: I1213 14:50:32.030705 1623 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:50:32.030950 kubelet[1623]: I1213 14:50:32.030918 1623 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:50:32.032206 kubelet[1623]: I1213 14:50:32.032179 1623 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:50:32.032291 kubelet[1623]: I1213 14:50:32.032261 1623 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:50:32.032345 kubelet[1623]: I1213 14:50:32.032293 1623 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:50:32.032569 kubelet[1623]: E1213 14:50:32.032537 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:50:32.032735 kubelet[1623]: E1213 14:50:32.032710 1623 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:50:32.034147 kubelet[1623]: I1213 14:50:32.034073 1623 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:50:32.037777 kubelet[1623]: I1213 14:50:32.037734 1623 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:50:32.038684 kubelet[1623]: W1213 14:50:32.038656 1623 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:50:32.038846 kubelet[1623]: E1213 14:50:32.038821 1623 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:50:32.039058 kubelet[1623]: W1213 14:50:32.039032 1623 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.230.34.126" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:50:32.039184 kubelet[1623]: E1213 14:50:32.039162 1623 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.34.126" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:50:32.039761 kubelet[1623]: W1213 14:50:32.039719 1623 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:50:32.041820 kubelet[1623]: I1213 14:50:32.041181 1623 server.go:1256] "Started kubelet"
Dec 13 14:50:32.046461 kubelet[1623]: I1213 14:50:32.045789 1623 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:50:32.046461 kubelet[1623]: I1213 14:50:32.046390 1623 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:50:32.046609 kubelet[1623]: I1213 14:50:32.046462 1623 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:50:32.050132 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:50:32.050438 kubelet[1623]: I1213 14:50:32.050377 1623 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:50:32.050616 kubelet[1623]: I1213 14:50:32.050590 1623 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:50:32.066675 kubelet[1623]: I1213 14:50:32.066595 1623 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:50:32.066815 kubelet[1623]: I1213 14:50:32.066792 1623 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:50:32.067092 kubelet[1623]: I1213 14:50:32.067063 1623 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:50:32.068860 kubelet[1623]: E1213 14:50:32.068833 1623 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:50:32.070177 kubelet[1623]: I1213 14:50:32.070145 1623 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:50:32.070332 kubelet[1623]: I1213 14:50:32.070303 1623 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:50:32.072380 kubelet[1623]: I1213 14:50:32.072354 1623 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:50:32.109720 kubelet[1623]: E1213 14:50:32.109679 1623 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.34.126\" not found" node="10.230.34.126"
Dec 13 14:50:32.120935 kubelet[1623]: I1213 14:50:32.120909 1623 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:50:32.121135 kubelet[1623]: I1213 14:50:32.121112 1623 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:50:32.121327 kubelet[1623]: I1213 14:50:32.121292 1623 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:50:32.123442 kubelet[1623]: I1213 14:50:32.123406 1623 policy_none.go:49] "None policy: Start"
Dec 13 14:50:32.125149 kubelet[1623]: I1213 14:50:32.125123 1623 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:50:32.125316 kubelet[1623]: I1213 14:50:32.125292 1623 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:50:32.142847 kubelet[1623]: I1213 14:50:32.142814 1623 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:50:32.143384 kubelet[1623]: I1213 14:50:32.143358 1623 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:50:32.151141 kubelet[1623]: E1213 14:50:32.151114 1623 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.34.126\" not found"
Dec 13 14:50:32.168262 kubelet[1623]: I1213 14:50:32.168230 1623 kubelet_node_status.go:73] "Attempting to register node" node="10.230.34.126"
Dec 13 14:50:32.175048 kubelet[1623]: I1213 14:50:32.175008 1623 kubelet_node_status.go:76] "Successfully registered node" node="10.230.34.126"
Dec 13 14:50:32.187690 kubelet[1623]: E1213 14:50:32.187625 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.226551 kubelet[1623]: I1213 14:50:32.224343 1623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:50:32.228955 kubelet[1623]: I1213 14:50:32.228921 1623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:50:32.229051 kubelet[1623]: I1213 14:50:32.229022 1623 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:50:32.229129 kubelet[1623]: I1213 14:50:32.229066 1623 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:50:32.229210 kubelet[1623]: E1213 14:50:32.229169 1623 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 14:50:32.288812 kubelet[1623]: E1213 14:50:32.288739 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.390077 kubelet[1623]: E1213 14:50:32.389922 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.492594 kubelet[1623]: E1213 14:50:32.491604 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.592707 kubelet[1623]: E1213 14:50:32.592602 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.693535 kubelet[1623]: E1213 14:50:32.693481 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.795127 kubelet[1623]: E1213 14:50:32.794447 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.895641 kubelet[1623]: E1213 14:50:32.895522 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:32.967867 kubelet[1623]: I1213 14:50:32.967808 1623 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:50:32.968161 kubelet[1623]: W1213 14:50:32.968126 1623 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:50:32.968293 kubelet[1623]: W1213 14:50:32.968253 1623 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:50:32.996389 kubelet[1623]: E1213 14:50:32.996357 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:33.033026 kubelet[1623]: E1213 14:50:33.032982 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:50:33.098288 kubelet[1623]: E1213 14:50:33.097327 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:33.198206 kubelet[1623]: E1213 14:50:33.198123 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:33.298906 kubelet[1623]: E1213 14:50:33.298846 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:33.399616 kubelet[1623]: E1213 14:50:33.399569 1623 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.34.126\" not found"
Dec 13 14:50:33.431687 sudo[1482]: pam_unix(sudo:session): session closed for user root
Dec 13 14:50:33.500839 kubelet[1623]: I1213 14:50:33.500785 1623 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:50:33.501378 env[1300]: time="2024-12-13T14:50:33.501288765Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:50:33.502440 kubelet[1623]: I1213 14:50:33.502345 1623 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:50:33.577233 sshd[1463]: pam_unix(sshd:session): session closed for user core Dec 13 14:50:33.582035 systemd[1]: sshd@6-10.230.34.126:22-139.178.68.195:34836.service: Deactivated successfully. Dec 13 14:50:33.583781 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:50:33.583813 systemd-logind[1287]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:50:33.585679 systemd-logind[1287]: Removed session 7. Dec 13 14:50:34.033822 kubelet[1623]: E1213 14:50:34.033757 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:34.034604 kubelet[1623]: I1213 14:50:34.034563 1623 apiserver.go:52] "Watching apiserver" Dec 13 14:50:34.042315 kubelet[1623]: I1213 14:50:34.042275 1623 topology_manager.go:215] "Topology Admit Handler" podUID="1d744a84-2197-4261-825f-25fbd8bac166" podNamespace="kube-system" podName="cilium-vc2vz" Dec 13 14:50:34.042596 kubelet[1623]: I1213 14:50:34.042572 1623 topology_manager.go:215] "Topology Admit Handler" podUID="5208d502-7db7-4927-9a8c-9b80ae22e27b" podNamespace="kube-system" podName="kube-proxy-tbxxt" Dec 13 14:50:34.068685 kubelet[1623]: I1213 14:50:34.068641 1623 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:50:34.081843 kubelet[1623]: I1213 14:50:34.081816 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-run\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082120 kubelet[1623]: I1213 14:50:34.082086 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5208d502-7db7-4927-9a8c-9b80ae22e27b-lib-modules\") pod \"kube-proxy-tbxxt\" (UID: \"5208d502-7db7-4927-9a8c-9b80ae22e27b\") " pod="kube-system/kube-proxy-tbxxt" Dec 13 14:50:34.082214 kubelet[1623]: I1213 14:50:34.082196 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cni-path\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082305 kubelet[1623]: I1213 14:50:34.082282 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d744a84-2197-4261-825f-25fbd8bac166-clustermesh-secrets\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082380 kubelet[1623]: I1213 14:50:34.082355 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d744a84-2197-4261-825f-25fbd8bac166-cilium-config-path\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082448 kubelet[1623]: I1213 14:50:34.082416 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-hubble-tls\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082508 kubelet[1623]: I1213 14:50:34.082486 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-bpf-maps\") pod \"cilium-vc2vz\" (UID: 
\"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082565 kubelet[1623]: I1213 14:50:34.082520 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-cgroup\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082637 kubelet[1623]: I1213 14:50:34.082590 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpq6r\" (UniqueName: \"kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-kube-api-access-fpq6r\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082712 kubelet[1623]: I1213 14:50:34.082657 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5208d502-7db7-4927-9a8c-9b80ae22e27b-kube-proxy\") pod \"kube-proxy-tbxxt\" (UID: \"5208d502-7db7-4927-9a8c-9b80ae22e27b\") " pod="kube-system/kube-proxy-tbxxt" Dec 13 14:50:34.082767 kubelet[1623]: I1213 14:50:34.082749 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbcv8\" (UniqueName: \"kubernetes.io/projected/5208d502-7db7-4927-9a8c-9b80ae22e27b-kube-api-access-zbcv8\") pod \"kube-proxy-tbxxt\" (UID: \"5208d502-7db7-4927-9a8c-9b80ae22e27b\") " pod="kube-system/kube-proxy-tbxxt" Dec 13 14:50:34.082823 kubelet[1623]: I1213 14:50:34.082813 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-hostproc\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082874 
kubelet[1623]: I1213 14:50:34.082862 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-net\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.082947 kubelet[1623]: I1213 14:50:34.082917 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-xtables-lock\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.083019 kubelet[1623]: I1213 14:50:34.082952 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-kernel\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.083090 kubelet[1623]: I1213 14:50:34.083037 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5208d502-7db7-4927-9a8c-9b80ae22e27b-xtables-lock\") pod \"kube-proxy-tbxxt\" (UID: \"5208d502-7db7-4927-9a8c-9b80ae22e27b\") " pod="kube-system/kube-proxy-tbxxt" Dec 13 14:50:34.083149 kubelet[1623]: I1213 14:50:34.083093 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-etc-cni-netd\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.083205 kubelet[1623]: I1213 14:50:34.083150 1623 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-lib-modules\") pod \"cilium-vc2vz\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " pod="kube-system/cilium-vc2vz" Dec 13 14:50:34.352546 env[1300]: time="2024-12-13T14:50:34.349711243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vc2vz,Uid:1d744a84-2197-4261-825f-25fbd8bac166,Namespace:kube-system,Attempt:0,}" Dec 13 14:50:34.352546 env[1300]: time="2024-12-13T14:50:34.350124568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tbxxt,Uid:5208d502-7db7-4927-9a8c-9b80ae22e27b,Namespace:kube-system,Attempt:0,}" Dec 13 14:50:35.035439 kubelet[1623]: E1213 14:50:35.035351 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:35.268834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638756085.mount: Deactivated successfully. 
Dec 13 14:50:35.277490 env[1300]: time="2024-12-13T14:50:35.277426462Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.279379 env[1300]: time="2024-12-13T14:50:35.279273356Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.281671 env[1300]: time="2024-12-13T14:50:35.281634764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.283099 env[1300]: time="2024-12-13T14:50:35.283064717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.286207 env[1300]: time="2024-12-13T14:50:35.285796622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.287434 env[1300]: time="2024-12-13T14:50:35.287401691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.290840 env[1300]: time="2024-12-13T14:50:35.290782540Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.313659 env[1300]: time="2024-12-13T14:50:35.313604426Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:35.378557 env[1300]: time="2024-12-13T14:50:35.378433316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:50:35.378803 env[1300]: time="2024-12-13T14:50:35.378540384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:50:35.378803 env[1300]: time="2024-12-13T14:50:35.378560627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:50:35.379041 env[1300]: time="2024-12-13T14:50:35.378826742Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea pid=1687 runtime=io.containerd.runc.v2 Dec 13 14:50:35.380157 env[1300]: time="2024-12-13T14:50:35.380052560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:50:35.380391 env[1300]: time="2024-12-13T14:50:35.380335153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:50:35.380567 env[1300]: time="2024-12-13T14:50:35.380513084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:50:35.381089 env[1300]: time="2024-12-13T14:50:35.381031092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39ceaafacf5f770d006bb9a084b2d5ed25b4325ead8867952f2244655e7f596c pid=1686 runtime=io.containerd.runc.v2 Dec 13 14:50:35.472899 env[1300]: time="2024-12-13T14:50:35.472842696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vc2vz,Uid:1d744a84-2197-4261-825f-25fbd8bac166,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\"" Dec 13 14:50:35.477880 env[1300]: time="2024-12-13T14:50:35.477842291Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:50:35.487318 env[1300]: time="2024-12-13T14:50:35.487260903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tbxxt,Uid:5208d502-7db7-4927-9a8c-9b80ae22e27b,Namespace:kube-system,Attempt:0,} returns sandbox id \"39ceaafacf5f770d006bb9a084b2d5ed25b4325ead8867952f2244655e7f596c\"" Dec 13 14:50:36.036251 kubelet[1623]: E1213 14:50:36.036156 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:37.036921 kubelet[1623]: E1213 14:50:37.036822 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:38.037259 kubelet[1623]: E1213 14:50:38.037181 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:39.038388 kubelet[1623]: E1213 14:50:39.038289 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:40.039266 kubelet[1623]: E1213 14:50:40.039125 1623 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:41.039531 kubelet[1623]: E1213 14:50:41.039417 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:42.040510 kubelet[1623]: E1213 14:50:42.040405 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:43.041677 kubelet[1623]: E1213 14:50:43.041614 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:44.042249 kubelet[1623]: E1213 14:50:44.042197 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:44.123143 update_engine[1289]: I1213 14:50:44.121351 1289 update_attempter.cc:509] Updating boot flags... Dec 13 14:50:45.043026 kubelet[1623]: E1213 14:50:45.042890 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:45.212055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223860028.mount: Deactivated successfully. 
Dec 13 14:50:46.043191 kubelet[1623]: E1213 14:50:46.043060 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:47.043944 kubelet[1623]: E1213 14:50:47.043879 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:48.044629 kubelet[1623]: E1213 14:50:48.044546 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:49.045564 kubelet[1623]: E1213 14:50:49.045488 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:49.950223 env[1300]: time="2024-12-13T14:50:49.950076318Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:49.952715 env[1300]: time="2024-12-13T14:50:49.952673909Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:49.954854 env[1300]: time="2024-12-13T14:50:49.954816377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:49.955988 env[1300]: time="2024-12-13T14:50:49.955917743Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:50:49.958957 env[1300]: time="2024-12-13T14:50:49.958908521Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:50:49.962189 env[1300]: time="2024-12-13T14:50:49.962118242Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:50:49.980728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378461549.mount: Deactivated successfully. Dec 13 14:50:49.995980 env[1300]: time="2024-12-13T14:50:49.995921311Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\"" Dec 13 14:50:49.997318 env[1300]: time="2024-12-13T14:50:49.997272227Z" level=info msg="StartContainer for \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\"" Dec 13 14:50:50.047763 kubelet[1623]: E1213 14:50:50.046528 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:50.092502 env[1300]: time="2024-12-13T14:50:50.092445330Z" level=info msg="StartContainer for \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\" returns successfully" Dec 13 14:50:50.264818 env[1300]: time="2024-12-13T14:50:50.264274253Z" level=info msg="shim disconnected" id=d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24 Dec 13 14:50:50.264818 env[1300]: time="2024-12-13T14:50:50.264336617Z" level=warning msg="cleaning up after shim disconnected" id=d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24 namespace=k8s.io Dec 13 14:50:50.264818 env[1300]: time="2024-12-13T14:50:50.264354958Z" level=info msg="cleaning up dead shim" Dec 13 14:50:50.284160 env[1300]: time="2024-12-13T14:50:50.284106145Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:50:50Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=1819 runtime=io.containerd.runc.v2\n" Dec 13 14:50:50.974892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24-rootfs.mount: Deactivated successfully. Dec 13 14:50:51.047085 kubelet[1623]: E1213 14:50:51.047013 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:51.287149 env[1300]: time="2024-12-13T14:50:51.286590105Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:50:51.307386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558551306.mount: Deactivated successfully. Dec 13 14:50:51.317344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434653819.mount: Deactivated successfully. Dec 13 14:50:51.325424 env[1300]: time="2024-12-13T14:50:51.325261374Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\"" Dec 13 14:50:51.326200 env[1300]: time="2024-12-13T14:50:51.326122813Z" level=info msg="StartContainer for \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\"" Dec 13 14:50:51.411252 env[1300]: time="2024-12-13T14:50:51.411169202Z" level=info msg="StartContainer for \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\" returns successfully" Dec 13 14:50:51.424860 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:50:51.425840 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:50:51.427297 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:50:51.435193 systemd[1]: Starting systemd-sysctl.service... 
Dec 13 14:50:51.446666 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:50:51.518609 env[1300]: time="2024-12-13T14:50:51.518520316Z" level=info msg="shim disconnected" id=75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08 Dec 13 14:50:51.519083 env[1300]: time="2024-12-13T14:50:51.518996109Z" level=warning msg="cleaning up after shim disconnected" id=75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08 namespace=k8s.io Dec 13 14:50:51.519376 env[1300]: time="2024-12-13T14:50:51.519163089Z" level=info msg="cleaning up dead shim" Dec 13 14:50:51.532091 env[1300]: time="2024-12-13T14:50:51.532054405Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:50:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1885 runtime=io.containerd.runc.v2\n" Dec 13 14:50:52.032785 kubelet[1623]: E1213 14:50:52.032712 1623 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:52.048053 kubelet[1623]: E1213 14:50:52.047994 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:52.290537 env[1300]: time="2024-12-13T14:50:52.290211146Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:50:52.312803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457666244.mount: Deactivated successfully. Dec 13 14:50:52.337188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111597575.mount: Deactivated successfully. 
Dec 13 14:50:52.365761 env[1300]: time="2024-12-13T14:50:52.365691429Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\"" Dec 13 14:50:52.366652 env[1300]: time="2024-12-13T14:50:52.366589442Z" level=info msg="StartContainer for \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\"" Dec 13 14:50:52.463586 env[1300]: time="2024-12-13T14:50:52.463484764Z" level=info msg="StartContainer for \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\" returns successfully" Dec 13 14:50:52.619801 env[1300]: time="2024-12-13T14:50:52.619251331Z" level=info msg="shim disconnected" id=e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d Dec 13 14:50:52.620092 env[1300]: time="2024-12-13T14:50:52.620059001Z" level=warning msg="cleaning up after shim disconnected" id=e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d namespace=k8s.io Dec 13 14:50:52.620245 env[1300]: time="2024-12-13T14:50:52.620216714Z" level=info msg="cleaning up dead shim" Dec 13 14:50:52.642916 env[1300]: time="2024-12-13T14:50:52.642832778Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:50:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1943 runtime=io.containerd.runc.v2\n" Dec 13 14:50:52.974480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618631898.mount: Deactivated successfully. 
Dec 13 14:50:53.048950 kubelet[1623]: E1213 14:50:53.048878 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:53.294382 env[1300]: time="2024-12-13T14:50:53.294253998Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:50:53.309909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037265901.mount: Deactivated successfully. Dec 13 14:50:53.326805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723154323.mount: Deactivated successfully. Dec 13 14:50:53.348694 env[1300]: time="2024-12-13T14:50:53.348638694Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\"" Dec 13 14:50:53.349457 env[1300]: time="2024-12-13T14:50:53.349418240Z" level=info msg="StartContainer for \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\"" Dec 13 14:50:53.485406 env[1300]: time="2024-12-13T14:50:53.485299921Z" level=info msg="StartContainer for \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\" returns successfully" Dec 13 14:50:53.500835 env[1300]: time="2024-12-13T14:50:53.500763673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:53.505943 env[1300]: time="2024-12-13T14:50:53.505898191Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:53.509672 env[1300]: time="2024-12-13T14:50:53.509634325Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:53.512789 env[1300]: time="2024-12-13T14:50:53.512746554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:50:53.513122 env[1300]: time="2024-12-13T14:50:53.513083512Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:50:53.516620 env[1300]: time="2024-12-13T14:50:53.516564202Z" level=info msg="CreateContainer within sandbox \"39ceaafacf5f770d006bb9a084b2d5ed25b4325ead8867952f2244655e7f596c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:50:53.540316 env[1300]: time="2024-12-13T14:50:53.540257854Z" level=info msg="shim disconnected" id=b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5 Dec 13 14:50:53.540739 env[1300]: time="2024-12-13T14:50:53.540707917Z" level=warning msg="cleaning up after shim disconnected" id=b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5 namespace=k8s.io Dec 13 14:50:53.540886 env[1300]: time="2024-12-13T14:50:53.540856544Z" level=info msg="cleaning up dead shim" Dec 13 14:50:53.550632 env[1300]: time="2024-12-13T14:50:53.550464910Z" level=info msg="CreateContainer within sandbox \"39ceaafacf5f770d006bb9a084b2d5ed25b4325ead8867952f2244655e7f596c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73c910127c54285bf5070dc9ee802eada01d11cc9533b4e1cfea26deb5398210\"" Dec 13 14:50:53.552504 env[1300]: time="2024-12-13T14:50:53.552431270Z" level=info msg="StartContainer for \"73c910127c54285bf5070dc9ee802eada01d11cc9533b4e1cfea26deb5398210\"" Dec 13 14:50:53.563741 
env[1300]: time="2024-12-13T14:50:53.563688449Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:50:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1997 runtime=io.containerd.runc.v2\n" Dec 13 14:50:53.653823 env[1300]: time="2024-12-13T14:50:53.653768680Z" level=info msg="StartContainer for \"73c910127c54285bf5070dc9ee802eada01d11cc9533b4e1cfea26deb5398210\" returns successfully" Dec 13 14:50:54.049401 kubelet[1623]: E1213 14:50:54.049304 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:54.300458 env[1300]: time="2024-12-13T14:50:54.300276055Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:50:54.316348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583787293.mount: Deactivated successfully. Dec 13 14:50:54.344995 env[1300]: time="2024-12-13T14:50:54.342208300Z" level=info msg="CreateContainer within sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\"" Dec 13 14:50:54.347983 env[1300]: time="2024-12-13T14:50:54.347911310Z" level=info msg="StartContainer for \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\"" Dec 13 14:50:54.436884 env[1300]: time="2024-12-13T14:50:54.431365264Z" level=info msg="StartContainer for \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\" returns successfully" Dec 13 14:50:54.578788 kubelet[1623]: I1213 14:50:54.578653 1623 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:50:55.050128 kubelet[1623]: E1213 14:50:55.050069 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:50:55.056993 kernel: Initializing XFRM netlink socket Dec 13 14:50:55.329906 kubelet[1623]: I1213 14:50:55.329776 1623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tbxxt" podStartSLOduration=5.304495803 podStartE2EDuration="23.329683756s" podCreationTimestamp="2024-12-13 14:50:32 +0000 UTC" firstStartedPulling="2024-12-13 14:50:35.48884252 +0000 UTC m=+4.575800758" lastFinishedPulling="2024-12-13 14:50:53.51403047 +0000 UTC m=+22.600988711" observedRunningTime="2024-12-13 14:50:54.356627169 +0000 UTC m=+23.443585409" watchObservedRunningTime="2024-12-13 14:50:55.329683756 +0000 UTC m=+24.416642005" Dec 13 14:50:56.050695 kubelet[1623]: E1213 14:50:56.050641 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:56.815695 systemd-networkd[1075]: cilium_host: Link UP Dec 13 14:50:56.825036 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:50:56.825221 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:50:56.817699 systemd-networkd[1075]: cilium_net: Link UP Dec 13 14:50:56.822362 systemd-networkd[1075]: cilium_net: Gained carrier Dec 13 14:50:56.823586 systemd-networkd[1075]: cilium_host: Gained carrier Dec 13 14:50:56.983649 systemd-networkd[1075]: cilium_vxlan: Link UP Dec 13 14:50:56.983659 systemd-networkd[1075]: cilium_vxlan: Gained carrier Dec 13 14:50:57.052634 kubelet[1623]: E1213 14:50:57.052535 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:57.241199 systemd-networkd[1075]: cilium_net: Gained IPv6LL Dec 13 14:50:57.248137 systemd-networkd[1075]: cilium_host: Gained IPv6LL Dec 13 14:50:57.376248 kubelet[1623]: I1213 14:50:57.376204 1623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vc2vz" 
podStartSLOduration=10.894479831 podStartE2EDuration="25.376144834s" podCreationTimestamp="2024-12-13 14:50:32 +0000 UTC" firstStartedPulling="2024-12-13 14:50:35.475554016 +0000 UTC m=+4.562512254" lastFinishedPulling="2024-12-13 14:50:49.95721901 +0000 UTC m=+19.044177257" observedRunningTime="2024-12-13 14:50:55.33092596 +0000 UTC m=+24.417884220" watchObservedRunningTime="2024-12-13 14:50:57.376144834 +0000 UTC m=+26.463103071" Dec 13 14:50:57.376686 kubelet[1623]: I1213 14:50:57.376656 1623 topology_manager.go:215] "Topology Admit Handler" podUID="233df86b-0ccc-4a85-b485-ce0c13f93875" podNamespace="default" podName="nginx-deployment-6d5f899847-4lwsp" Dec 13 14:50:57.392003 kernel: NET: Registered PF_ALG protocol family Dec 13 14:50:57.444885 kubelet[1623]: I1213 14:50:57.444818 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2p6\" (UniqueName: \"kubernetes.io/projected/233df86b-0ccc-4a85-b485-ce0c13f93875-kube-api-access-jf2p6\") pod \"nginx-deployment-6d5f899847-4lwsp\" (UID: \"233df86b-0ccc-4a85-b485-ce0c13f93875\") " pod="default/nginx-deployment-6d5f899847-4lwsp" Dec 13 14:50:57.685790 env[1300]: time="2024-12-13T14:50:57.684515023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-4lwsp,Uid:233df86b-0ccc-4a85-b485-ce0c13f93875,Namespace:default,Attempt:0,}" Dec 13 14:50:58.053302 kubelet[1623]: E1213 14:50:58.053138 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:58.408200 systemd-networkd[1075]: lxc_health: Link UP Dec 13 14:50:58.418070 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:50:58.418448 systemd-networkd[1075]: lxc_health: Gained carrier Dec 13 14:50:58.755537 systemd-networkd[1075]: lxc6a011ad8eb89: Link UP Dec 13 14:50:58.763010 kernel: eth0: renamed from tmp36813 Dec 13 14:50:58.769291 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc6a011ad8eb89: link becomes ready Dec 13 14:50:58.768914 systemd-networkd[1075]: lxc6a011ad8eb89: Gained carrier Dec 13 14:50:58.864073 systemd-networkd[1075]: cilium_vxlan: Gained IPv6LL Dec 13 14:50:59.054010 kubelet[1623]: E1213 14:50:59.053732 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:50:59.816307 systemd-networkd[1075]: lxc_health: Gained IPv6LL Dec 13 14:51:00.054911 kubelet[1623]: E1213 14:51:00.054851 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:00.392255 systemd-networkd[1075]: lxc6a011ad8eb89: Gained IPv6LL Dec 13 14:51:01.056730 kubelet[1623]: E1213 14:51:01.056628 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:02.056900 kubelet[1623]: E1213 14:51:02.056836 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:03.058214 kubelet[1623]: E1213 14:51:03.058069 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:04.059359 kubelet[1623]: E1213 14:51:04.059302 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:04.324937 env[1300]: time="2024-12-13T14:51:04.324145839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:51:04.324937 env[1300]: time="2024-12-13T14:51:04.324233704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:51:04.324937 env[1300]: time="2024-12-13T14:51:04.324271791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:51:04.326190 env[1300]: time="2024-12-13T14:51:04.326088441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/368137faecde7b6c00e443a5d4c09b018f01ba28dec4fe4e28ceaca57f1252a1 pid=2679 runtime=io.containerd.runc.v2 Dec 13 14:51:04.420894 env[1300]: time="2024-12-13T14:51:04.420814796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-4lwsp,Uid:233df86b-0ccc-4a85-b485-ce0c13f93875,Namespace:default,Attempt:0,} returns sandbox id \"368137faecde7b6c00e443a5d4c09b018f01ba28dec4fe4e28ceaca57f1252a1\"" Dec 13 14:51:04.424094 env[1300]: time="2024-12-13T14:51:04.424045187Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:51:05.059876 kubelet[1623]: E1213 14:51:05.059799 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:06.061031 kubelet[1623]: E1213 14:51:06.060860 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:07.062171 kubelet[1623]: E1213 14:51:07.062027 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:08.063208 kubelet[1623]: E1213 14:51:08.063131 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:08.857032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828497510.mount: Deactivated successfully. 
Dec 13 14:51:09.064114 kubelet[1623]: E1213 14:51:09.064037 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:10.065339 kubelet[1623]: E1213 14:51:10.065227 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:11.066517 kubelet[1623]: E1213 14:51:11.066423 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:11.355310 env[1300]: time="2024-12-13T14:51:11.355136200Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:11.357994 env[1300]: time="2024-12-13T14:51:11.357946531Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:11.360174 env[1300]: time="2024-12-13T14:51:11.360139792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:11.361305 env[1300]: time="2024-12-13T14:51:11.361230708Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:51:11.363714 env[1300]: time="2024-12-13T14:51:11.363649417Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:11.365997 env[1300]: time="2024-12-13T14:51:11.365939961Z" level=info msg="CreateContainer within sandbox 
\"368137faecde7b6c00e443a5d4c09b018f01ba28dec4fe4e28ceaca57f1252a1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:51:11.382676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164652227.mount: Deactivated successfully. Dec 13 14:51:11.407214 env[1300]: time="2024-12-13T14:51:11.407119433Z" level=info msg="CreateContainer within sandbox \"368137faecde7b6c00e443a5d4c09b018f01ba28dec4fe4e28ceaca57f1252a1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d02d4e32637f9fad6a56e48560ccc89142b90961ced16df517d4b1e91ff82c08\"" Dec 13 14:51:11.409328 env[1300]: time="2024-12-13T14:51:11.409234676Z" level=info msg="StartContainer for \"d02d4e32637f9fad6a56e48560ccc89142b90961ced16df517d4b1e91ff82c08\"" Dec 13 14:51:11.498876 env[1300]: time="2024-12-13T14:51:11.497198289Z" level=info msg="StartContainer for \"d02d4e32637f9fad6a56e48560ccc89142b90961ced16df517d4b1e91ff82c08\" returns successfully" Dec 13 14:51:12.034691 kubelet[1623]: E1213 14:51:12.034526 1623 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:12.067346 kubelet[1623]: E1213 14:51:12.067318 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:13.067900 kubelet[1623]: E1213 14:51:13.067834 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:14.068781 kubelet[1623]: E1213 14:51:14.068694 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:15.070649 kubelet[1623]: E1213 14:51:15.070593 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:16.071481 kubelet[1623]: E1213 14:51:16.071410 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:17.073155 kubelet[1623]: E1213 14:51:17.073084 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:18.073994 kubelet[1623]: E1213 14:51:18.073930 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:18.681353 kubelet[1623]: I1213 14:51:18.681311 1623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-4lwsp" podStartSLOduration=14.742698728 podStartE2EDuration="21.681174447s" podCreationTimestamp="2024-12-13 14:50:57 +0000 UTC" firstStartedPulling="2024-12-13 14:51:04.423108078 +0000 UTC m=+33.510066316" lastFinishedPulling="2024-12-13 14:51:11.36158379 +0000 UTC m=+40.448542035" observedRunningTime="2024-12-13 14:51:12.361861226 +0000 UTC m=+41.448819472" watchObservedRunningTime="2024-12-13 14:51:18.681174447 +0000 UTC m=+47.768132699" Dec 13 14:51:18.682024 kubelet[1623]: I1213 14:51:18.681996 1623 topology_manager.go:215] "Topology Admit Handler" podUID="f16eedbe-35b4-466c-9475-a627e9179aef" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:51:18.785840 kubelet[1623]: I1213 14:51:18.785804 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f16eedbe-35b4-466c-9475-a627e9179aef-data\") pod \"nfs-server-provisioner-0\" (UID: \"f16eedbe-35b4-466c-9475-a627e9179aef\") " pod="default/nfs-server-provisioner-0" Dec 13 14:51:18.786165 kubelet[1623]: I1213 14:51:18.786137 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9nm\" (UniqueName: \"kubernetes.io/projected/f16eedbe-35b4-466c-9475-a627e9179aef-kube-api-access-gr9nm\") pod \"nfs-server-provisioner-0\" (UID: \"f16eedbe-35b4-466c-9475-a627e9179aef\") " 
pod="default/nfs-server-provisioner-0" Dec 13 14:51:18.992314 env[1300]: time="2024-12-13T14:51:18.991260056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f16eedbe-35b4-466c-9475-a627e9179aef,Namespace:default,Attempt:0,}" Dec 13 14:51:19.055592 systemd-networkd[1075]: lxcc8eca54fd7ad: Link UP Dec 13 14:51:19.068035 kernel: eth0: renamed from tmp04b41 Dec 13 14:51:19.075056 kubelet[1623]: E1213 14:51:19.074986 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:19.081359 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:51:19.081453 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc8eca54fd7ad: link becomes ready Dec 13 14:51:19.081636 systemd-networkd[1075]: lxcc8eca54fd7ad: Gained carrier Dec 13 14:51:19.347647 env[1300]: time="2024-12-13T14:51:19.347056939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:51:19.348030 env[1300]: time="2024-12-13T14:51:19.347195466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:51:19.348247 env[1300]: time="2024-12-13T14:51:19.348001207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:51:19.348776 env[1300]: time="2024-12-13T14:51:19.348672473Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04b41a516bf3cdaacdd473c11615011764faf023eadab8e24bed27ae6f136ffe pid=2812 runtime=io.containerd.runc.v2 Dec 13 14:51:19.441125 env[1300]: time="2024-12-13T14:51:19.441037199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f16eedbe-35b4-466c-9475-a627e9179aef,Namespace:default,Attempt:0,} returns sandbox id \"04b41a516bf3cdaacdd473c11615011764faf023eadab8e24bed27ae6f136ffe\"" Dec 13 14:51:19.443528 env[1300]: time="2024-12-13T14:51:19.443472100Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:51:20.076000 kubelet[1623]: E1213 14:51:20.075913 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:20.360523 systemd-networkd[1075]: lxcc8eca54fd7ad: Gained IPv6LL Dec 13 14:51:21.076496 kubelet[1623]: E1213 14:51:21.076417 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:22.076651 kubelet[1623]: E1213 14:51:22.076594 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:23.077462 kubelet[1623]: E1213 14:51:23.077354 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:23.745619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468921260.mount: Deactivated successfully. 
Dec 13 14:51:24.078431 kubelet[1623]: E1213 14:51:24.077997 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:25.078285 kubelet[1623]: E1213 14:51:25.078216 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:26.079797 kubelet[1623]: E1213 14:51:26.079293 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:27.083206 kubelet[1623]: E1213 14:51:27.083133 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:27.371704 env[1300]: time="2024-12-13T14:51:27.371371819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:27.375548 env[1300]: time="2024-12-13T14:51:27.375476119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:27.379074 env[1300]: time="2024-12-13T14:51:27.379027206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:27.382830 env[1300]: time="2024-12-13T14:51:27.382793149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:27.383957 env[1300]: time="2024-12-13T14:51:27.383909423Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:51:27.388834 env[1300]: time="2024-12-13T14:51:27.388795451Z" level=info msg="CreateContainer within sandbox \"04b41a516bf3cdaacdd473c11615011764faf023eadab8e24bed27ae6f136ffe\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:51:27.404418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990501793.mount: Deactivated successfully. Dec 13 14:51:27.413523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864285915.mount: Deactivated successfully. Dec 13 14:51:27.422607 env[1300]: time="2024-12-13T14:51:27.422521106Z" level=info msg="CreateContainer within sandbox \"04b41a516bf3cdaacdd473c11615011764faf023eadab8e24bed27ae6f136ffe\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b81998712b4096440324a84f9b1d6e13528888a6df4f65b4b445deeebbb8b202\"" Dec 13 14:51:27.423566 env[1300]: time="2024-12-13T14:51:27.423396388Z" level=info msg="StartContainer for \"b81998712b4096440324a84f9b1d6e13528888a6df4f65b4b445deeebbb8b202\"" Dec 13 14:51:27.577138 env[1300]: time="2024-12-13T14:51:27.577069846Z" level=info msg="StartContainer for \"b81998712b4096440324a84f9b1d6e13528888a6df4f65b4b445deeebbb8b202\" returns successfully" Dec 13 14:51:28.083456 kubelet[1623]: E1213 14:51:28.083368 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:28.419684 kubelet[1623]: I1213 14:51:28.419613 1623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.477219814 podStartE2EDuration="10.419504303s" podCreationTimestamp="2024-12-13 14:51:18 +0000 UTC" firstStartedPulling="2024-12-13 14:51:19.443040939 +0000 UTC m=+48.529999174" lastFinishedPulling="2024-12-13 14:51:27.385325418 +0000 UTC m=+56.472283663" observedRunningTime="2024-12-13 
14:51:28.419274911 +0000 UTC m=+57.506233155" watchObservedRunningTime="2024-12-13 14:51:28.419504303 +0000 UTC m=+57.506462547" Dec 13 14:51:29.084172 kubelet[1623]: E1213 14:51:29.084101 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:30.085536 kubelet[1623]: E1213 14:51:30.085426 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:31.086467 kubelet[1623]: E1213 14:51:31.086341 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:32.033265 kubelet[1623]: E1213 14:51:32.033204 1623 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:32.086724 kubelet[1623]: E1213 14:51:32.086600 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:33.087693 kubelet[1623]: E1213 14:51:33.087626 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:34.088957 kubelet[1623]: E1213 14:51:34.088902 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:35.090039 kubelet[1623]: E1213 14:51:35.089957 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:36.091159 kubelet[1623]: E1213 14:51:36.091085 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:37.092098 kubelet[1623]: E1213 14:51:37.092024 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:37.268109 kubelet[1623]: I1213 14:51:37.268049 1623 
topology_manager.go:215] "Topology Admit Handler" podUID="39f97f38-319e-482f-b9a9-2c1395f6153c" podNamespace="default" podName="test-pod-1" Dec 13 14:51:37.322726 kubelet[1623]: I1213 14:51:37.322673 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25hl4\" (UniqueName: \"kubernetes.io/projected/39f97f38-319e-482f-b9a9-2c1395f6153c-kube-api-access-25hl4\") pod \"test-pod-1\" (UID: \"39f97f38-319e-482f-b9a9-2c1395f6153c\") " pod="default/test-pod-1" Dec 13 14:51:37.322993 kubelet[1623]: I1213 14:51:37.322757 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8f2d506-b547-451e-a001-1bc6c6b475fb\" (UniqueName: \"kubernetes.io/nfs/39f97f38-319e-482f-b9a9-2c1395f6153c-pvc-f8f2d506-b547-451e-a001-1bc6c6b475fb\") pod \"test-pod-1\" (UID: \"39f97f38-319e-482f-b9a9-2c1395f6153c\") " pod="default/test-pod-1" Dec 13 14:51:37.476023 kernel: FS-Cache: Loaded Dec 13 14:51:37.540221 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:51:37.540412 kernel: RPC: Registered udp transport module. Dec 13 14:51:37.540464 kernel: RPC: Registered tcp transport module. Dec 13 14:51:37.541383 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:51:37.630005 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:51:37.880465 kernel: NFS: Registering the id_resolver key type Dec 13 14:51:37.880854 kernel: Key type id_resolver registered Dec 13 14:51:37.883000 kernel: Key type id_legacy registered Dec 13 14:51:37.939903 nfsidmap[2937]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 14:51:37.946770 nfsidmap[2940]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 14:51:38.093044 kubelet[1623]: E1213 14:51:38.092930 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:38.177243 env[1300]: time="2024-12-13T14:51:38.177071754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39f97f38-319e-482f-b9a9-2c1395f6153c,Namespace:default,Attempt:0,}" Dec 13 14:51:38.233727 systemd-networkd[1075]: lxc9ce1cf5f4cd4: Link UP Dec 13 14:51:38.241186 kernel: eth0: renamed from tmp4712b Dec 13 14:51:38.251271 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:51:38.251368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9ce1cf5f4cd4: link becomes ready Dec 13 14:51:38.251432 systemd-networkd[1075]: lxc9ce1cf5f4cd4: Gained carrier Dec 13 14:51:38.473167 env[1300]: time="2024-12-13T14:51:38.472533318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:51:38.473439 env[1300]: time="2024-12-13T14:51:38.472644980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:51:38.473439 env[1300]: time="2024-12-13T14:51:38.472674632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:51:38.473439 env[1300]: time="2024-12-13T14:51:38.473022883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4712be9646ececb4cf49aba863cf7279faba4dddd51ed89b3540832fc683dded pid=2977 runtime=io.containerd.runc.v2 Dec 13 14:51:38.502163 systemd[1]: run-containerd-runc-k8s.io-4712be9646ececb4cf49aba863cf7279faba4dddd51ed89b3540832fc683dded-runc.uWzZOr.mount: Deactivated successfully. Dec 13 14:51:38.592006 env[1300]: time="2024-12-13T14:51:38.591729637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39f97f38-319e-482f-b9a9-2c1395f6153c,Namespace:default,Attempt:0,} returns sandbox id \"4712be9646ececb4cf49aba863cf7279faba4dddd51ed89b3540832fc683dded\"" Dec 13 14:51:38.594746 env[1300]: time="2024-12-13T14:51:38.594713075Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:51:38.962626 env[1300]: time="2024-12-13T14:51:38.962577365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:38.964409 env[1300]: time="2024-12-13T14:51:38.964371165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:38.966541 env[1300]: time="2024-12-13T14:51:38.966489465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:38.969016 env[1300]: time="2024-12-13T14:51:38.968956536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:51:38.970241 env[1300]: time="2024-12-13T14:51:38.970180702Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:51:38.973360 env[1300]: time="2024-12-13T14:51:38.973312173Z" level=info msg="CreateContainer within sandbox \"4712be9646ececb4cf49aba863cf7279faba4dddd51ed89b3540832fc683dded\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:51:38.988875 env[1300]: time="2024-12-13T14:51:38.988788766Z" level=info msg="CreateContainer within sandbox \"4712be9646ececb4cf49aba863cf7279faba4dddd51ed89b3540832fc683dded\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"efad13e89b5878cc9e41d0753e6bc8ff8d11c062e71f94a50dfdfb3bc57e8055\"" Dec 13 14:51:38.989490 env[1300]: time="2024-12-13T14:51:38.989399938Z" level=info msg="StartContainer for \"efad13e89b5878cc9e41d0753e6bc8ff8d11c062e71f94a50dfdfb3bc57e8055\"" Dec 13 14:51:39.087631 env[1300]: time="2024-12-13T14:51:39.087536329Z" level=info msg="StartContainer for \"efad13e89b5878cc9e41d0753e6bc8ff8d11c062e71f94a50dfdfb3bc57e8055\" returns successfully" Dec 13 14:51:39.093545 kubelet[1623]: E1213 14:51:39.093491 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:39.442474 kubelet[1623]: I1213 14:51:39.442090 1623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.065765341 podStartE2EDuration="19.442042991s" podCreationTimestamp="2024-12-13 14:51:20 +0000 UTC" firstStartedPulling="2024-12-13 14:51:38.594356317 +0000 UTC m=+67.681314553" lastFinishedPulling="2024-12-13 14:51:38.970633963 +0000 UTC m=+68.057592203" observedRunningTime="2024-12-13 14:51:39.439809724 +0000 UTC m=+68.526767989" watchObservedRunningTime="2024-12-13 14:51:39.442042991 +0000 UTC m=+68.529001247" Dec 13 
14:51:39.624527 systemd-networkd[1075]: lxc9ce1cf5f4cd4: Gained IPv6LL Dec 13 14:51:40.093864 kubelet[1623]: E1213 14:51:40.093816 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:41.094912 kubelet[1623]: E1213 14:51:41.094766 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:42.096873 kubelet[1623]: E1213 14:51:42.096741 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:43.098629 kubelet[1623]: E1213 14:51:43.098550 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:44.099116 kubelet[1623]: E1213 14:51:44.099059 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:45.101307 kubelet[1623]: E1213 14:51:45.101232 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:45.884959 systemd[1]: run-containerd-runc-k8s.io-d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9-runc.iU1K90.mount: Deactivated successfully. 
Dec 13 14:51:45.913185 env[1300]: time="2024-12-13T14:51:45.913115155Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:51:45.920481 env[1300]: time="2024-12-13T14:51:45.920410802Z" level=info msg="StopContainer for \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\" with timeout 2 (s)" Dec 13 14:51:45.920860 env[1300]: time="2024-12-13T14:51:45.920813004Z" level=info msg="Stop container \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\" with signal terminated" Dec 13 14:51:45.932610 systemd-networkd[1075]: lxc_health: Link DOWN Dec 13 14:51:45.932622 systemd-networkd[1075]: lxc_health: Lost carrier Dec 13 14:51:45.993657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9-rootfs.mount: Deactivated successfully. 
Dec 13 14:51:46.007402 env[1300]: time="2024-12-13T14:51:46.007241307Z" level=info msg="shim disconnected" id=d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9 Dec 13 14:51:46.007402 env[1300]: time="2024-12-13T14:51:46.007320958Z" level=warning msg="cleaning up after shim disconnected" id=d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9 namespace=k8s.io Dec 13 14:51:46.007402 env[1300]: time="2024-12-13T14:51:46.007338893Z" level=info msg="cleaning up dead shim" Dec 13 14:51:46.020283 env[1300]: time="2024-12-13T14:51:46.020216815Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3107 runtime=io.containerd.runc.v2\n" Dec 13 14:51:46.022268 env[1300]: time="2024-12-13T14:51:46.022220278Z" level=info msg="StopContainer for \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\" returns successfully" Dec 13 14:51:46.023224 env[1300]: time="2024-12-13T14:51:46.023174049Z" level=info msg="StopPodSandbox for \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\"" Dec 13 14:51:46.023311 env[1300]: time="2024-12-13T14:51:46.023263317Z" level=info msg="Container to stop \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:51:46.023311 env[1300]: time="2024-12-13T14:51:46.023292004Z" level=info msg="Container to stop \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:51:46.023431 env[1300]: time="2024-12-13T14:51:46.023310422Z" level=info msg="Container to stop \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:51:46.023431 env[1300]: time="2024-12-13T14:51:46.023329468Z" level=info msg="Container to stop 
\"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:51:46.023431 env[1300]: time="2024-12-13T14:51:46.023347589Z" level=info msg="Container to stop \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:51:46.026216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea-shm.mount: Deactivated successfully. Dec 13 14:51:46.062053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea-rootfs.mount: Deactivated successfully. Dec 13 14:51:46.068588 env[1300]: time="2024-12-13T14:51:46.068517566Z" level=info msg="shim disconnected" id=0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea Dec 13 14:51:46.068780 env[1300]: time="2024-12-13T14:51:46.068588568Z" level=warning msg="cleaning up after shim disconnected" id=0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea namespace=k8s.io Dec 13 14:51:46.068780 env[1300]: time="2024-12-13T14:51:46.068605613Z" level=info msg="cleaning up dead shim" Dec 13 14:51:46.081155 env[1300]: time="2024-12-13T14:51:46.081090708Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3142 runtime=io.containerd.runc.v2\n" Dec 13 14:51:46.082001 env[1300]: time="2024-12-13T14:51:46.081928082Z" level=info msg="TearDown network for sandbox \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" successfully" Dec 13 14:51:46.082112 env[1300]: time="2024-12-13T14:51:46.082001644Z" level=info msg="StopPodSandbox for \"0d0320d98c5f9b71b27ef90ddfee766481e94d487f270f4836ef3b61eda9c3ea\" returns successfully" Dec 13 14:51:46.101855 kubelet[1623]: E1213 14:51:46.101805 1623 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:46.184983 kubelet[1623]: I1213 14:51:46.184927 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-hostproc\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185193 kubelet[1623]: I1213 14:51:46.185036 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d744a84-2197-4261-825f-25fbd8bac166-cilium-config-path\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185193 kubelet[1623]: I1213 14:51:46.185075 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-cgroup\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185193 kubelet[1623]: I1213 14:51:46.185108 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpq6r\" (UniqueName: \"kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-kube-api-access-fpq6r\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185193 kubelet[1623]: I1213 14:51:46.185136 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-bpf-maps\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185193 kubelet[1623]: I1213 14:51:46.185181 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-run\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185512 kubelet[1623]: I1213 14:51:46.185214 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-kernel\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185512 kubelet[1623]: I1213 14:51:46.185241 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-lib-modules\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185512 kubelet[1623]: I1213 14:51:46.185271 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-hubble-tls\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185512 kubelet[1623]: I1213 14:51:46.185312 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-xtables-lock\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185512 kubelet[1623]: I1213 14:51:46.185339 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-net\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185512 kubelet[1623]: I1213 14:51:46.185388 
1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-etc-cni-netd\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185843 kubelet[1623]: I1213 14:51:46.185418 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cni-path\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.185843 kubelet[1623]: I1213 14:51:46.185467 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d744a84-2197-4261-825f-25fbd8bac166-clustermesh-secrets\") pod \"1d744a84-2197-4261-825f-25fbd8bac166\" (UID: \"1d744a84-2197-4261-825f-25fbd8bac166\") " Dec 13 14:51:46.186456 kubelet[1623]: I1213 14:51:46.186110 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.186456 kubelet[1623]: I1213 14:51:46.186186 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-hostproc" (OuterVolumeSpecName: "hostproc") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.189984 kubelet[1623]: I1213 14:51:46.186936 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.190614 kubelet[1623]: I1213 14:51:46.190581 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.191475 kubelet[1623]: I1213 14:51:46.191444 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-kube-api-access-fpq6r" (OuterVolumeSpecName: "kube-api-access-fpq6r") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "kube-api-access-fpq6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:51:46.191662 kubelet[1623]: I1213 14:51:46.191628 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.192149 kubelet[1623]: I1213 14:51:46.191804 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.192286 kubelet[1623]: I1213 14:51:46.191837 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.192429 kubelet[1623]: I1213 14:51:46.191856 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.192581 kubelet[1623]: I1213 14:51:46.191885 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cni-path" (OuterVolumeSpecName: "cni-path") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.192757 kubelet[1623]: I1213 14:51:46.191907 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:51:46.193109 kubelet[1623]: I1213 14:51:46.193079 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d744a84-2197-4261-825f-25fbd8bac166-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:51:46.194105 kubelet[1623]: I1213 14:51:46.194065 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:51:46.197269 kubelet[1623]: I1213 14:51:46.197231 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d744a84-2197-4261-825f-25fbd8bac166-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1d744a84-2197-4261-825f-25fbd8bac166" (UID: "1d744a84-2197-4261-825f-25fbd8bac166"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:51:46.286600 kubelet[1623]: I1213 14:51:46.286542 1623 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-run\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.286863 kubelet[1623]: I1213 14:51:46.286839 1623 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-kernel\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.287112 kubelet[1623]: I1213 14:51:46.287070 1623 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fpq6r\" (UniqueName: \"kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-kube-api-access-fpq6r\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.287259 kubelet[1623]: I1213 14:51:46.287237 1623 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-bpf-maps\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.287448 kubelet[1623]: I1213 14:51:46.287414 1623 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-xtables-lock\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.287581 kubelet[1623]: I1213 14:51:46.287561 1623 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-lib-modules\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.287759 kubelet[1623]: I1213 14:51:46.287737 1623 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d744a84-2197-4261-825f-25fbd8bac166-hubble-tls\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.287914 
kubelet[1623]: I1213 14:51:46.287893 1623 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cni-path\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.288099 kubelet[1623]: I1213 14:51:46.288077 1623 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d744a84-2197-4261-825f-25fbd8bac166-clustermesh-secrets\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.288284 kubelet[1623]: I1213 14:51:46.288261 1623 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-host-proc-sys-net\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.288494 kubelet[1623]: I1213 14:51:46.288462 1623 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-etc-cni-netd\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.288656 kubelet[1623]: I1213 14:51:46.288622 1623 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d744a84-2197-4261-825f-25fbd8bac166-cilium-config-path\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.288876 kubelet[1623]: I1213 14:51:46.288854 1623 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-cilium-cgroup\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.289046 kubelet[1623]: I1213 14:51:46.289012 1623 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d744a84-2197-4261-825f-25fbd8bac166-hostproc\") on node \"10.230.34.126\" DevicePath \"\"" Dec 13 14:51:46.446292 kubelet[1623]: I1213 14:51:46.444376 1623 scope.go:117] "RemoveContainer" 
containerID="d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9" Dec 13 14:51:46.446446 env[1300]: time="2024-12-13T14:51:46.446085636Z" level=info msg="RemoveContainer for \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\"" Dec 13 14:51:46.450092 env[1300]: time="2024-12-13T14:51:46.450021572Z" level=info msg="RemoveContainer for \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\" returns successfully" Dec 13 14:51:46.450501 kubelet[1623]: I1213 14:51:46.450464 1623 scope.go:117] "RemoveContainer" containerID="b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5" Dec 13 14:51:46.452874 env[1300]: time="2024-12-13T14:51:46.452586787Z" level=info msg="RemoveContainer for \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\"" Dec 13 14:51:46.455605 env[1300]: time="2024-12-13T14:51:46.455569754Z" level=info msg="RemoveContainer for \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\" returns successfully" Dec 13 14:51:46.455929 kubelet[1623]: I1213 14:51:46.455904 1623 scope.go:117] "RemoveContainer" containerID="e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d" Dec 13 14:51:46.457409 env[1300]: time="2024-12-13T14:51:46.457372612Z" level=info msg="RemoveContainer for \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\"" Dec 13 14:51:46.461025 env[1300]: time="2024-12-13T14:51:46.460938410Z" level=info msg="RemoveContainer for \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\" returns successfully" Dec 13 14:51:46.461367 kubelet[1623]: I1213 14:51:46.461303 1623 scope.go:117] "RemoveContainer" containerID="75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08" Dec 13 14:51:46.463018 env[1300]: time="2024-12-13T14:51:46.462945711Z" level=info msg="RemoveContainer for \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\"" Dec 13 14:51:46.465799 env[1300]: time="2024-12-13T14:51:46.465762730Z" level=info 
msg="RemoveContainer for \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\" returns successfully" Dec 13 14:51:46.465998 kubelet[1623]: I1213 14:51:46.465948 1623 scope.go:117] "RemoveContainer" containerID="d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24" Dec 13 14:51:46.467912 env[1300]: time="2024-12-13T14:51:46.467874557Z" level=info msg="RemoveContainer for \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\"" Dec 13 14:51:46.470834 env[1300]: time="2024-12-13T14:51:46.470792428Z" level=info msg="RemoveContainer for \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\" returns successfully" Dec 13 14:51:46.471046 kubelet[1623]: I1213 14:51:46.471007 1623 scope.go:117] "RemoveContainer" containerID="d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9" Dec 13 14:51:46.471860 env[1300]: time="2024-12-13T14:51:46.471744609Z" level=error msg="ContainerStatus for \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\": not found" Dec 13 14:51:46.473288 kubelet[1623]: E1213 14:51:46.473254 1623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\": not found" containerID="d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9" Dec 13 14:51:46.473788 kubelet[1623]: I1213 14:51:46.473736 1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9"} err="failed to get container status \"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d02fb8923cbe191552b028d59866f9b98e798699ac851bab17bb435cfc5453b9\": not found" Dec 13 14:51:46.473996 kubelet[1623]: I1213 14:51:46.473947 1623 scope.go:117] "RemoveContainer" containerID="b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5" Dec 13 14:51:46.474991 env[1300]: time="2024-12-13T14:51:46.474906518Z" level=error msg="ContainerStatus for \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\": not found" Dec 13 14:51:46.475506 kubelet[1623]: E1213 14:51:46.475463 1623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\": not found" containerID="b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5" Dec 13 14:51:46.475588 kubelet[1623]: I1213 14:51:46.475510 1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5"} err="failed to get container status \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7563026dad6743308294b30bb1197f84111a7fd7cec1d539b6219e8e84c7ae5\": not found" Dec 13 14:51:46.475588 kubelet[1623]: I1213 14:51:46.475550 1623 scope.go:117] "RemoveContainer" containerID="e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d" Dec 13 14:51:46.475826 env[1300]: time="2024-12-13T14:51:46.475749496Z" level=error msg="ContainerStatus for \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\": not found" Dec 13 14:51:46.476258 kubelet[1623]: E1213 14:51:46.476216 1623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\": not found" containerID="e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d" Dec 13 14:51:46.477090 kubelet[1623]: I1213 14:51:46.477062 1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d"} err="failed to get container status \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e55356e12c40bf55f395e58d0da792dc491362d79f158640adc2c1d7cd80972d\": not found" Dec 13 14:51:46.477229 kubelet[1623]: I1213 14:51:46.477205 1623 scope.go:117] "RemoveContainer" containerID="75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08" Dec 13 14:51:46.480272 env[1300]: time="2024-12-13T14:51:46.480192504Z" level=error msg="ContainerStatus for \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\": not found" Dec 13 14:51:46.481123 kubelet[1623]: E1213 14:51:46.481094 1623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\": not found" containerID="75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08" Dec 13 14:51:46.481214 kubelet[1623]: I1213 14:51:46.481141 1623 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08"} err="failed to get container status \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\": rpc error: code = NotFound desc = an error occurred when try to find container \"75936ed9c0a6ce689e393d792e6177809b7d336d4308fae269b0b8df7ffbce08\": not found" Dec 13 14:51:46.481214 kubelet[1623]: I1213 14:51:46.481192 1623 scope.go:117] "RemoveContainer" containerID="d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24" Dec 13 14:51:46.481474 env[1300]: time="2024-12-13T14:51:46.481411928Z" level=error msg="ContainerStatus for \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\": not found" Dec 13 14:51:46.481682 kubelet[1623]: E1213 14:51:46.481622 1623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\": not found" containerID="d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24" Dec 13 14:51:46.481790 kubelet[1623]: I1213 14:51:46.481701 1623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24"} err="failed to get container status \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\": rpc error: code = NotFound desc = an error occurred when try to find container \"d43ef6b7a07b71319b9392bceac4ede8981f1054e4785c32af6e1bda0f9d6e24\": not found" Dec 13 14:51:46.880216 systemd[1]: var-lib-kubelet-pods-1d744a84\x2d2197\x2d4261\x2d825f\x2d25fbd8bac166-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpq6r.mount: Deactivated successfully. 
Dec 13 14:51:46.880504 systemd[1]: var-lib-kubelet-pods-1d744a84\x2d2197\x2d4261\x2d825f\x2d25fbd8bac166-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:51:46.880688 systemd[1]: var-lib-kubelet-pods-1d744a84\x2d2197\x2d4261\x2d825f\x2d25fbd8bac166-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:51:47.102348 kubelet[1623]: E1213 14:51:47.102262 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:47.166359 kubelet[1623]: E1213 14:51:47.166328 1623 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:51:48.104182 kubelet[1623]: E1213 14:51:48.104119 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:48.233770 kubelet[1623]: I1213 14:51:48.233727 1623 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d744a84-2197-4261-825f-25fbd8bac166" path="/var/lib/kubelet/pods/1d744a84-2197-4261-825f-25fbd8bac166/volumes" Dec 13 14:51:49.105787 kubelet[1623]: E1213 14:51:49.105694 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:50.026479 kubelet[1623]: I1213 14:51:50.026376 1623 topology_manager.go:215] "Topology Admit Handler" podUID="ff0bc178-7590-43cb-83bd-a34ee367c5ea" podNamespace="kube-system" podName="cilium-nrq58" Dec 13 14:51:50.026678 kubelet[1623]: E1213 14:51:50.026564 1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d744a84-2197-4261-825f-25fbd8bac166" containerName="mount-bpf-fs" Dec 13 14:51:50.026678 kubelet[1623]: E1213 14:51:50.026593 1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d744a84-2197-4261-825f-25fbd8bac166" 
containerName="cilium-agent"
Dec 13 14:51:50.026678 kubelet[1623]: E1213 14:51:50.026615 1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d744a84-2197-4261-825f-25fbd8bac166" containerName="mount-cgroup"
Dec 13 14:51:50.026678 kubelet[1623]: E1213 14:51:50.026627 1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d744a84-2197-4261-825f-25fbd8bac166" containerName="apply-sysctl-overwrites"
Dec 13 14:51:50.026678 kubelet[1623]: E1213 14:51:50.026639 1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d744a84-2197-4261-825f-25fbd8bac166" containerName="clean-cilium-state"
Dec 13 14:51:50.027011 kubelet[1623]: I1213 14:51:50.026707 1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d744a84-2197-4261-825f-25fbd8bac166" containerName="cilium-agent"
Dec 13 14:51:50.050998 kubelet[1623]: I1213 14:51:50.050934 1623 topology_manager.go:215] "Topology Admit Handler" podUID="025ef501-c2c8-4a38-b071-e6548d1665e9" podNamespace="kube-system" podName="cilium-operator-5cc964979-zrr9b"
Dec 13 14:51:50.107127 kubelet[1623]: E1213 14:51:50.107012 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:51:50.114552 kubelet[1623]: I1213 14:51:50.114520 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cni-path\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.114679 kubelet[1623]: I1213 14:51:50.114577 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-clustermesh-secrets\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.114679 kubelet[1623]: I1213 14:51:50.114611 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-ipsec-secrets\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.114679 kubelet[1623]: I1213 14:51:50.114655 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-config-path\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.114870 kubelet[1623]: I1213 14:51:50.114691 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hubble-tls\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.114870 kubelet[1623]: I1213 14:51:50.114737 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/025ef501-c2c8-4a38-b071-e6548d1665e9-cilium-config-path\") pod \"cilium-operator-5cc964979-zrr9b\" (UID: \"025ef501-c2c8-4a38-b071-e6548d1665e9\") " pod="kube-system/cilium-operator-5cc964979-zrr9b"
Dec 13 14:51:50.114870 kubelet[1623]: I1213 14:51:50.114769 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624f5\" (UniqueName: \"kubernetes.io/projected/025ef501-c2c8-4a38-b071-e6548d1665e9-kube-api-access-624f5\") pod \"cilium-operator-5cc964979-zrr9b\" (UID: \"025ef501-c2c8-4a38-b071-e6548d1665e9\") " pod="kube-system/cilium-operator-5cc964979-zrr9b"
Dec 13 14:51:50.114870 kubelet[1623]: I1213 14:51:50.114808 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-lib-modules\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.114870 kubelet[1623]: I1213 14:51:50.114843 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-etc-cni-netd\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115221 kubelet[1623]: I1213 14:51:50.114874 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-xtables-lock\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115221 kubelet[1623]: I1213 14:51:50.114906 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnk22\" (UniqueName: \"kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-kube-api-access-tnk22\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115221 kubelet[1623]: I1213 14:51:50.114945 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-bpf-maps\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115221 kubelet[1623]: I1213 14:51:50.115002 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hostproc\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115221 kubelet[1623]: I1213 14:51:50.115050 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-net\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115221 kubelet[1623]: I1213 14:51:50.115081 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-kernel\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115561 kubelet[1623]: I1213 14:51:50.115149 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-run\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.115561 kubelet[1623]: I1213 14:51:50.115184 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-cgroup\") pod \"cilium-nrq58\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") " pod="kube-system/cilium-nrq58"
Dec 13 14:51:50.333205 env[1300]: time="2024-12-13T14:51:50.333000079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nrq58,Uid:ff0bc178-7590-43cb-83bd-a34ee367c5ea,Namespace:kube-system,Attempt:0,}"
Dec 13 14:51:50.356427 env[1300]: time="2024-12-13T14:51:50.356321171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:51:50.356855 env[1300]: time="2024-12-13T14:51:50.356787498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:51:50.359693 env[1300]: time="2024-12-13T14:51:50.359647916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zrr9b,Uid:025ef501-c2c8-4a38-b071-e6548d1665e9,Namespace:kube-system,Attempt:0,}"
Dec 13 14:51:50.360417 env[1300]: time="2024-12-13T14:51:50.360368881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:51:50.361040 env[1300]: time="2024-12-13T14:51:50.360779952Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700 pid=3172 runtime=io.containerd.runc.v2
Dec 13 14:51:50.378710 env[1300]: time="2024-12-13T14:51:50.378572398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:51:50.378710 env[1300]: time="2024-12-13T14:51:50.378626623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:51:50.378941 env[1300]: time="2024-12-13T14:51:50.378644069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:51:50.379143 env[1300]: time="2024-12-13T14:51:50.378932325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bc1546a5fef3bb4f064d52748a9d17250139883a78f174c9a84561e691e6252 pid=3195 runtime=io.containerd.runc.v2
Dec 13 14:51:50.446081 env[1300]: time="2024-12-13T14:51:50.446022765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nrq58,Uid:ff0bc178-7590-43cb-83bd-a34ee367c5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\""
Dec 13 14:51:50.451324 env[1300]: time="2024-12-13T14:51:50.451282703Z" level=info msg="CreateContainer within sandbox \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:51:50.486498 env[1300]: time="2024-12-13T14:51:50.486421818Z" level=info msg="CreateContainer within sandbox \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030\""
Dec 13 14:51:50.487721 env[1300]: time="2024-12-13T14:51:50.487682757Z" level=info msg="StartContainer for \"55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030\""
Dec 13 14:51:50.492179 env[1300]: time="2024-12-13T14:51:50.492130266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zrr9b,Uid:025ef501-c2c8-4a38-b071-e6548d1665e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bc1546a5fef3bb4f064d52748a9d17250139883a78f174c9a84561e691e6252\""
Dec 13 14:51:50.495343 env[1300]: time="2024-12-13T14:51:50.495288051Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:51:50.559177 env[1300]: time="2024-12-13T14:51:50.559115170Z" level=info msg="StartContainer for \"55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030\" returns successfully"
Dec 13 14:51:50.611338 env[1300]: time="2024-12-13T14:51:50.611186924Z" level=info msg="shim disconnected" id=55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030
Dec 13 14:51:50.611629 env[1300]: time="2024-12-13T14:51:50.611595453Z" level=warning msg="cleaning up after shim disconnected" id=55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030 namespace=k8s.io
Dec 13 14:51:50.611776 env[1300]: time="2024-12-13T14:51:50.611746681Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:50.622788 env[1300]: time="2024-12-13T14:51:50.622708474Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:51.108641 kubelet[1623]: E1213 14:51:51.108506 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:51:51.465803 env[1300]: time="2024-12-13T14:51:51.465732305Z" level=info msg="CreateContainer within sandbox \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:51:51.483349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805137903.mount: Deactivated successfully.
Dec 13 14:51:51.493976 env[1300]: time="2024-12-13T14:51:51.493884593Z" level=info msg="CreateContainer within sandbox \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335\""
Dec 13 14:51:51.494991 env[1300]: time="2024-12-13T14:51:51.494904535Z" level=info msg="StartContainer for \"7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335\""
Dec 13 14:51:51.584580 env[1300]: time="2024-12-13T14:51:51.584486802Z" level=info msg="StartContainer for \"7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335\" returns successfully"
Dec 13 14:51:51.619452 env[1300]: time="2024-12-13T14:51:51.619363141Z" level=info msg="shim disconnected" id=7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335
Dec 13 14:51:51.619452 env[1300]: time="2024-12-13T14:51:51.619436249Z" level=warning msg="cleaning up after shim disconnected" id=7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335 namespace=k8s.io
Dec 13 14:51:51.619452 env[1300]: time="2024-12-13T14:51:51.619454534Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:51.631115 env[1300]: time="2024-12-13T14:51:51.631062256Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:52.033027 kubelet[1623]: E1213 14:51:52.032933 1623 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:51:52.109493 kubelet[1623]: E1213 14:51:52.109395 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:51:52.168370 kubelet[1623]: E1213 14:51:52.168326 1623 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:51:52.223123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335-rootfs.mount: Deactivated successfully.
Dec 13 14:51:52.472779 env[1300]: time="2024-12-13T14:51:52.472679632Z" level=info msg="CreateContainer within sandbox \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:51:52.492583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160288111.mount: Deactivated successfully.
Dec 13 14:51:52.504413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325289968.mount: Deactivated successfully.
Dec 13 14:51:52.512568 env[1300]: time="2024-12-13T14:51:52.512517184Z" level=info msg="CreateContainer within sandbox \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56\""
Dec 13 14:51:52.515264 env[1300]: time="2024-12-13T14:51:52.515212911Z" level=info msg="StartContainer for \"3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56\""
Dec 13 14:51:52.599696 env[1300]: time="2024-12-13T14:51:52.599642329Z" level=info msg="StartContainer for \"3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56\" returns successfully"
Dec 13 14:51:52.628428 env[1300]: time="2024-12-13T14:51:52.628382863Z" level=info msg="shim disconnected" id=3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56
Dec 13 14:51:52.628792 env[1300]: time="2024-12-13T14:51:52.628760715Z" level=warning msg="cleaning up after shim disconnected" id=3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56 namespace=k8s.io
Dec 13 14:51:52.628987 env[1300]: time="2024-12-13T14:51:52.628921998Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:52.641070 env[1300]: time="2024-12-13T14:51:52.641035360Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3418 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:53.110443 kubelet[1623]: E1213 14:51:53.110388 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:51:53.366559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267682804.mount: Deactivated successfully.
Dec 13 14:51:53.457263 kubelet[1623]: I1213 14:51:53.456939 1623 setters.go:568] "Node became not ready" node="10.230.34.126" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:51:53Z","lastTransitionTime":"2024-12-13T14:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:51:53.474897 env[1300]: time="2024-12-13T14:51:53.474849421Z" level=info msg="StopPodSandbox for \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\""
Dec 13 14:51:53.475815 env[1300]: time="2024-12-13T14:51:53.475765335Z" level=info msg="Container to stop \"7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:53.476006 env[1300]: time="2024-12-13T14:51:53.475937463Z" level=info msg="Container to stop \"55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:53.477657 env[1300]: time="2024-12-13T14:51:53.476121891Z" level=info msg="Container to stop \"3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:53.480899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700-shm.mount: Deactivated successfully.
Dec 13 14:51:53.525554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700-rootfs.mount: Deactivated successfully.
Dec 13 14:51:53.541478 env[1300]: time="2024-12-13T14:51:53.541425839Z" level=info msg="shim disconnected" id=5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700
Dec 13 14:51:53.542458 env[1300]: time="2024-12-13T14:51:53.542425142Z" level=warning msg="cleaning up after shim disconnected" id=5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700 namespace=k8s.io
Dec 13 14:51:53.542616 env[1300]: time="2024-12-13T14:51:53.542577537Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:53.556422 env[1300]: time="2024-12-13T14:51:53.556387011Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3452 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:53.556984 env[1300]: time="2024-12-13T14:51:53.556934670Z" level=info msg="TearDown network for sandbox \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" successfully"
Dec 13 14:51:53.557142 env[1300]: time="2024-12-13T14:51:53.557105757Z" level=info msg="StopPodSandbox for \"5f1f139cda0efc7b0cc33c727b56fb2ae1daa639ca3c0b8829524dd27de2a700\" returns successfully"
Dec 13 14:51:53.651294 kubelet[1623]: I1213 14:51:53.651251 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cni-path\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.651627 kubelet[1623]: I1213 14:51:53.651576 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-xtables-lock\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.651831 kubelet[1623]: I1213 14:51:53.651799 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-run\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.652007 kubelet[1623]: I1213 14:51:53.651357 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cni-path" (OuterVolumeSpecName: "cni-path") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.652113 kubelet[1623]: I1213 14:51:53.651640 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.652113 kubelet[1623]: I1213 14:51:53.651856 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.652383 kubelet[1623]: I1213 14:51:53.652349 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-ipsec-secrets\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.652559 kubelet[1623]: I1213 14:51:53.652536 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-config-path\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.652734 kubelet[1623]: I1213 14:51:53.652709 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-etc-cni-netd\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.652960 kubelet[1623]: I1213 14:51:53.652900 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-kernel\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.653188 kubelet[1623]: I1213 14:51:53.653156 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-cgroup\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.653340 kubelet[1623]: I1213 14:51:53.653317 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-bpf-maps\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.653548 kubelet[1623]: I1213 14:51:53.653516 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-net\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.653714 kubelet[1623]: I1213 14:51:53.653692 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-clustermesh-secrets\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.653890 kubelet[1623]: I1213 14:51:53.653866 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-lib-modules\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.654071 kubelet[1623]: I1213 14:51:53.654048 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hubble-tls\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.654295 kubelet[1623]: I1213 14:51:53.654262 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnk22\" (UniqueName: \"kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-kube-api-access-tnk22\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.654548 kubelet[1623]: I1213 14:51:53.654515 1623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hostproc\") pod \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\" (UID: \"ff0bc178-7590-43cb-83bd-a34ee367c5ea\") "
Dec 13 14:51:53.654744 kubelet[1623]: I1213 14:51:53.654701 1623 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-xtables-lock\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.654906 kubelet[1623]: I1213 14:51:53.654883 1623 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-run\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.655093 kubelet[1623]: I1213 14:51:53.655071 1623 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cni-path\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.655319 kubelet[1623]: I1213 14:51:53.655273 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hostproc" (OuterVolumeSpecName: "hostproc") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.657046 kubelet[1623]: I1213 14:51:53.657009 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.657364 kubelet[1623]: I1213 14:51:53.657234 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.659084 kubelet[1623]: I1213 14:51:53.657513 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.659250 kubelet[1623]: I1213 14:51:53.657543 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.659392 kubelet[1623]: I1213 14:51:53.657578 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.659551 kubelet[1623]: I1213 14:51:53.658368 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:53.671521 kubelet[1623]: I1213 14:51:53.671487 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:51:53.672001 kubelet[1623]: I1213 14:51:53.671938 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:51:53.674026 kubelet[1623]: I1213 14:51:53.673932 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:51:53.675108 kubelet[1623]: I1213 14:51:53.675076 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:51:53.675426 kubelet[1623]: I1213 14:51:53.675394 1623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-kube-api-access-tnk22" (OuterVolumeSpecName: "kube-api-access-tnk22") pod "ff0bc178-7590-43cb-83bd-a34ee367c5ea" (UID: "ff0bc178-7590-43cb-83bd-a34ee367c5ea"). InnerVolumeSpecName "kube-api-access-tnk22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:51:53.756368 kubelet[1623]: I1213 14:51:53.756322 1623 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-ipsec-secrets\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.756653 kubelet[1623]: I1213 14:51:53.756629 1623 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-config-path\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.756820 kubelet[1623]: I1213 14:51:53.756797 1623 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-etc-cni-netd\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.757035 kubelet[1623]: I1213 14:51:53.757005 1623 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-kernel\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.757229 kubelet[1623]: I1213 14:51:53.757205 1623 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-cilium-cgroup\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.757394 kubelet[1623]: I1213 14:51:53.757373 1623 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-bpf-maps\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.757542 kubelet[1623]: I1213 14:51:53.757520 1623 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-host-proc-sys-net\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.757702 kubelet[1623]: I1213 14:51:53.757680 1623 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff0bc178-7590-43cb-83bd-a34ee367c5ea-clustermesh-secrets\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.757858 kubelet[1623]: I1213 14:51:53.757836 1623 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-lib-modules\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.758020 kubelet[1623]: I1213 14:51:53.757998 1623 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hubble-tls\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.758212 kubelet[1623]: I1213 14:51:53.758190 1623 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tnk22\" (UniqueName: \"kubernetes.io/projected/ff0bc178-7590-43cb-83bd-a34ee367c5ea-kube-api-access-tnk22\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:53.758369 kubelet[1623]: I1213 14:51:53.758348 1623 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff0bc178-7590-43cb-83bd-a34ee367c5ea-hostproc\") on node \"10.230.34.126\" DevicePath \"\""
Dec 13 14:51:54.111738 kubelet[1623]: E1213 14:51:54.111583 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:51:54.223504 systemd[1]: var-lib-kubelet-pods-ff0bc178\x2d7590\x2d43cb\x2d83bd\x2da34ee367c5ea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:51:54.223763 systemd[1]: var-lib-kubelet-pods-ff0bc178\x2d7590\x2d43cb\x2d83bd\x2da34ee367c5ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtnk22.mount: Deactivated successfully.
Dec 13 14:51:54.224007 systemd[1]: var-lib-kubelet-pods-ff0bc178\x2d7590\x2d43cb\x2d83bd\x2da34ee367c5ea-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:51:54.224192 systemd[1]: var-lib-kubelet-pods-ff0bc178\x2d7590\x2d43cb\x2d83bd\x2da34ee367c5ea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:51:54.478683 kubelet[1623]: I1213 14:51:54.478632 1623 scope.go:117] "RemoveContainer" containerID="3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56"
Dec 13 14:51:54.481030 env[1300]: time="2024-12-13T14:51:54.480985780Z" level=info msg="RemoveContainer for \"3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56\""
Dec 13 14:51:54.485381 env[1300]: time="2024-12-13T14:51:54.485341193Z" level=info msg="RemoveContainer for \"3a86d5f215abe6c2829e57c6d1516c122b98afafb9a745eb7815347751ad9e56\" returns successfully"
Dec 13 14:51:54.485698 kubelet[1623]: I1213 14:51:54.485669 1623 scope.go:117] "RemoveContainer" containerID="7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335"
Dec 13 14:51:54.486878 env[1300]: time="2024-12-13T14:51:54.486842672Z" level=info msg="RemoveContainer for \"7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335\""
Dec 13 14:51:54.490106 env[1300]: time="2024-12-13T14:51:54.490061484Z" level=info msg="RemoveContainer for \"7790cde5a958bd593afa72d21c69cefc6f6b336b045d32c3d059fab6704a4335\" returns successfully"
Dec 13 14:51:54.491223 kubelet[1623]: I1213 14:51:54.491100 1623 scope.go:117] "RemoveContainer" containerID="55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030"
Dec 13 14:51:54.492426 env[1300]: time="2024-12-13T14:51:54.492388704Z" level=info msg="RemoveContainer for \"55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030\""
Dec 13 14:51:54.496010 env[1300]: time="2024-12-13T14:51:54.495942902Z" level=info msg="RemoveContainer for \"55096a083a3c09cface21065b40effc6e470c454926935f51da04b3773699030\" returns successfully"
Dec 13 14:51:54.525506 kubelet[1623]: I1213 14:51:54.525447 1623 topology_manager.go:215] "Topology Admit Handler" podUID="0214fb13-2ce4-4fac-8302-d7e986404acb" podNamespace="kube-system" podName="cilium-xz2v2"
Dec 13 14:51:54.525774 kubelet[1623]: E1213 14:51:54.525733 1623 cpu_manager.go:395] "RemoveStaleState: removing container"
podUID="ff0bc178-7590-43cb-83bd-a34ee367c5ea" containerName="mount-bpf-fs" Dec 13 14:51:54.525962 kubelet[1623]: E1213 14:51:54.525940 1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff0bc178-7590-43cb-83bd-a34ee367c5ea" containerName="mount-cgroup" Dec 13 14:51:54.526213 kubelet[1623]: E1213 14:51:54.526189 1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff0bc178-7590-43cb-83bd-a34ee367c5ea" containerName="apply-sysctl-overwrites" Dec 13 14:51:54.526412 kubelet[1623]: I1213 14:51:54.526379 1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0bc178-7590-43cb-83bd-a34ee367c5ea" containerName="mount-bpf-fs" Dec 13 14:51:54.563566 kubelet[1623]: I1213 14:51:54.563515 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-host-proc-sys-kernel\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.563850 kubelet[1623]: I1213 14:51:54.563799 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpf7\" (UniqueName: \"kubernetes.io/projected/0214fb13-2ce4-4fac-8302-d7e986404acb-kube-api-access-9cpf7\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.564130 kubelet[1623]: I1213 14:51:54.564101 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-hostproc\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.564363 kubelet[1623]: I1213 14:51:54.564323 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0214fb13-2ce4-4fac-8302-d7e986404acb-cilium-ipsec-secrets\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.564545 kubelet[1623]: I1213 14:51:54.564513 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-cilium-run\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.564746 kubelet[1623]: I1213 14:51:54.564708 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-bpf-maps\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.564959 kubelet[1623]: I1213 14:51:54.564923 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-xtables-lock\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.565200 kubelet[1623]: I1213 14:51:54.565152 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-cni-path\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.565406 kubelet[1623]: I1213 14:51:54.565373 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-etc-cni-netd\") pod \"cilium-xz2v2\" (UID: 
\"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.565587 kubelet[1623]: I1213 14:51:54.565556 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0214fb13-2ce4-4fac-8302-d7e986404acb-clustermesh-secrets\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.565790 kubelet[1623]: I1213 14:51:54.565759 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0214fb13-2ce4-4fac-8302-d7e986404acb-hubble-tls\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.566029 kubelet[1623]: I1213 14:51:54.566006 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-cilium-cgroup\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.566254 kubelet[1623]: I1213 14:51:54.566216 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-host-proc-sys-net\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.566459 kubelet[1623]: I1213 14:51:54.566426 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0214fb13-2ce4-4fac-8302-d7e986404acb-cilium-config-path\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.566645 kubelet[1623]: 
I1213 14:51:54.566614 1623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0214fb13-2ce4-4fac-8302-d7e986404acb-lib-modules\") pod \"cilium-xz2v2\" (UID: \"0214fb13-2ce4-4fac-8302-d7e986404acb\") " pod="kube-system/cilium-xz2v2" Dec 13 14:51:54.597069 env[1300]: time="2024-12-13T14:51:54.596954125Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:54.600006 env[1300]: time="2024-12-13T14:51:54.599834745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:54.601999 env[1300]: time="2024-12-13T14:51:54.601869655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:51:54.603076 env[1300]: time="2024-12-13T14:51:54.603017058Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:51:54.606007 env[1300]: time="2024-12-13T14:51:54.605948729Z" level=info msg="CreateContainer within sandbox \"0bc1546a5fef3bb4f064d52748a9d17250139883a78f174c9a84561e691e6252\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:51:54.619041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035481809.mount: Deactivated successfully. 
Dec 13 14:51:54.628870 env[1300]: time="2024-12-13T14:51:54.628818379Z" level=info msg="CreateContainer within sandbox \"0bc1546a5fef3bb4f064d52748a9d17250139883a78f174c9a84561e691e6252\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ba59f6518bc9a0f8e94ef4003d608a7136c692860777d2d65c196f068ee8f869\"" Dec 13 14:51:54.629661 env[1300]: time="2024-12-13T14:51:54.629626141Z" level=info msg="StartContainer for \"ba59f6518bc9a0f8e94ef4003d608a7136c692860777d2d65c196f068ee8f869\"" Dec 13 14:51:54.723822 env[1300]: time="2024-12-13T14:51:54.723667168Z" level=info msg="StartContainer for \"ba59f6518bc9a0f8e94ef4003d608a7136c692860777d2d65c196f068ee8f869\" returns successfully" Dec 13 14:51:54.834305 env[1300]: time="2024-12-13T14:51:54.834001997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xz2v2,Uid:0214fb13-2ce4-4fac-8302-d7e986404acb,Namespace:kube-system,Attempt:0,}" Dec 13 14:51:54.868654 env[1300]: time="2024-12-13T14:51:54.868539016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:51:54.869132 env[1300]: time="2024-12-13T14:51:54.869075302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:51:54.869426 env[1300]: time="2024-12-13T14:51:54.869185125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:51:54.872333 env[1300]: time="2024-12-13T14:51:54.872269603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2 pid=3519 runtime=io.containerd.runc.v2 Dec 13 14:51:54.950913 env[1300]: time="2024-12-13T14:51:54.950834094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xz2v2,Uid:0214fb13-2ce4-4fac-8302-d7e986404acb,Namespace:kube-system,Attempt:0,} returns sandbox id \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\"" Dec 13 14:51:54.954924 env[1300]: time="2024-12-13T14:51:54.954882807Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:51:54.966875 env[1300]: time="2024-12-13T14:51:54.966828857Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0dcb524aff77fb15364fbef338e8e51bcf3fa24a66b6d087f63bdaeaf7053a3b\"" Dec 13 14:51:54.971675 env[1300]: time="2024-12-13T14:51:54.969944811Z" level=info msg="StartContainer for \"0dcb524aff77fb15364fbef338e8e51bcf3fa24a66b6d087f63bdaeaf7053a3b\"" Dec 13 14:51:55.055229 env[1300]: time="2024-12-13T14:51:55.055142243Z" level=info msg="StartContainer for \"0dcb524aff77fb15364fbef338e8e51bcf3fa24a66b6d087f63bdaeaf7053a3b\" returns successfully" Dec 13 14:51:55.113310 kubelet[1623]: E1213 14:51:55.113111 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:55.216751 env[1300]: time="2024-12-13T14:51:55.216657621Z" level=info msg="shim disconnected" id=0dcb524aff77fb15364fbef338e8e51bcf3fa24a66b6d087f63bdaeaf7053a3b Dec 13 14:51:55.216751 
env[1300]: time="2024-12-13T14:51:55.216742301Z" level=warning msg="cleaning up after shim disconnected" id=0dcb524aff77fb15364fbef338e8e51bcf3fa24a66b6d087f63bdaeaf7053a3b namespace=k8s.io Dec 13 14:51:55.216751 env[1300]: time="2024-12-13T14:51:55.216760650Z" level=info msg="cleaning up dead shim" Dec 13 14:51:55.242506 env[1300]: time="2024-12-13T14:51:55.242444134Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3603 runtime=io.containerd.runc.v2\n" Dec 13 14:51:55.490218 env[1300]: time="2024-12-13T14:51:55.490130531Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:51:55.505955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974160975.mount: Deactivated successfully. Dec 13 14:51:55.514776 env[1300]: time="2024-12-13T14:51:55.514656237Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf91cdcf3349acee74fa45b106b030c50fda4a4bffb5ecd37227f6b0d400263b\"" Dec 13 14:51:55.515377 env[1300]: time="2024-12-13T14:51:55.515340539Z" level=info msg="StartContainer for \"bf91cdcf3349acee74fa45b106b030c50fda4a4bffb5ecd37227f6b0d400263b\"" Dec 13 14:51:55.528840 kubelet[1623]: I1213 14:51:55.527881 1623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-zrr9b" podStartSLOduration=2.417836582 podStartE2EDuration="6.527772263s" podCreationTimestamp="2024-12-13 14:51:49 +0000 UTC" firstStartedPulling="2024-12-13 14:51:50.493547561 +0000 UTC m=+79.580505808" lastFinishedPulling="2024-12-13 14:51:54.603483244 +0000 UTC m=+83.690441489" observedRunningTime="2024-12-13 14:51:55.501750977 +0000 UTC m=+84.588709224" 
watchObservedRunningTime="2024-12-13 14:51:55.527772263 +0000 UTC m=+84.614730509" Dec 13 14:51:55.596906 env[1300]: time="2024-12-13T14:51:55.596686563Z" level=info msg="StartContainer for \"bf91cdcf3349acee74fa45b106b030c50fda4a4bffb5ecd37227f6b0d400263b\" returns successfully" Dec 13 14:51:55.627239 env[1300]: time="2024-12-13T14:51:55.627177593Z" level=info msg="shim disconnected" id=bf91cdcf3349acee74fa45b106b030c50fda4a4bffb5ecd37227f6b0d400263b Dec 13 14:51:55.627239 env[1300]: time="2024-12-13T14:51:55.627238949Z" level=warning msg="cleaning up after shim disconnected" id=bf91cdcf3349acee74fa45b106b030c50fda4a4bffb5ecd37227f6b0d400263b namespace=k8s.io Dec 13 14:51:55.627573 env[1300]: time="2024-12-13T14:51:55.627257304Z" level=info msg="cleaning up dead shim" Dec 13 14:51:55.637802 env[1300]: time="2024-12-13T14:51:55.637725816Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3664 runtime=io.containerd.runc.v2\n" Dec 13 14:51:56.114001 kubelet[1623]: E1213 14:51:56.113907 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:56.223626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf91cdcf3349acee74fa45b106b030c50fda4a4bffb5ecd37227f6b0d400263b-rootfs.mount: Deactivated successfully. 
Dec 13 14:51:56.233425 kubelet[1623]: I1213 14:51:56.233391 1623 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ff0bc178-7590-43cb-83bd-a34ee367c5ea" path="/var/lib/kubelet/pods/ff0bc178-7590-43cb-83bd-a34ee367c5ea/volumes" Dec 13 14:51:56.494479 env[1300]: time="2024-12-13T14:51:56.494415398Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:51:56.514622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055913530.mount: Deactivated successfully. Dec 13 14:51:56.523983 env[1300]: time="2024-12-13T14:51:56.523855492Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1b6287adcf6025e503ff8966652d6e29a414bf2417bcae146c98481739353ed6\"" Dec 13 14:51:56.524916 env[1300]: time="2024-12-13T14:51:56.524880045Z" level=info msg="StartContainer for \"1b6287adcf6025e503ff8966652d6e29a414bf2417bcae146c98481739353ed6\"" Dec 13 14:51:56.604790 env[1300]: time="2024-12-13T14:51:56.604737150Z" level=info msg="StartContainer for \"1b6287adcf6025e503ff8966652d6e29a414bf2417bcae146c98481739353ed6\" returns successfully" Dec 13 14:51:56.733323 env[1300]: time="2024-12-13T14:51:56.733258940Z" level=info msg="shim disconnected" id=1b6287adcf6025e503ff8966652d6e29a414bf2417bcae146c98481739353ed6 Dec 13 14:51:56.733727 env[1300]: time="2024-12-13T14:51:56.733694273Z" level=warning msg="cleaning up after shim disconnected" id=1b6287adcf6025e503ff8966652d6e29a414bf2417bcae146c98481739353ed6 namespace=k8s.io Dec 13 14:51:56.733874 env[1300]: time="2024-12-13T14:51:56.733846065Z" level=info msg="cleaning up dead shim" Dec 13 14:51:56.746024 env[1300]: time="2024-12-13T14:51:56.745525727Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:56Z\" level=info msg=\"starting 
signal loop\" namespace=k8s.io pid=3721 runtime=io.containerd.runc.v2\n" Dec 13 14:51:57.114901 kubelet[1623]: E1213 14:51:57.114416 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:57.169813 kubelet[1623]: E1213 14:51:57.169749 1623 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:51:57.224215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b6287adcf6025e503ff8966652d6e29a414bf2417bcae146c98481739353ed6-rootfs.mount: Deactivated successfully. Dec 13 14:51:57.498943 env[1300]: time="2024-12-13T14:51:57.498871404Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:51:57.646527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516713404.mount: Deactivated successfully. 
Dec 13 14:51:57.656692 env[1300]: time="2024-12-13T14:51:57.656599980Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a3c8fceb01fae54c46ef4d2ea2f2a17d9d55a9a4c9b52b86684569abb02ebfc\"" Dec 13 14:51:57.657678 env[1300]: time="2024-12-13T14:51:57.657631416Z" level=info msg="StartContainer for \"7a3c8fceb01fae54c46ef4d2ea2f2a17d9d55a9a4c9b52b86684569abb02ebfc\"" Dec 13 14:51:57.751184 env[1300]: time="2024-12-13T14:51:57.750757967Z" level=info msg="StartContainer for \"7a3c8fceb01fae54c46ef4d2ea2f2a17d9d55a9a4c9b52b86684569abb02ebfc\" returns successfully" Dec 13 14:51:57.773232 env[1300]: time="2024-12-13T14:51:57.773142215Z" level=info msg="shim disconnected" id=7a3c8fceb01fae54c46ef4d2ea2f2a17d9d55a9a4c9b52b86684569abb02ebfc Dec 13 14:51:57.773526 env[1300]: time="2024-12-13T14:51:57.773491589Z" level=warning msg="cleaning up after shim disconnected" id=7a3c8fceb01fae54c46ef4d2ea2f2a17d9d55a9a4c9b52b86684569abb02ebfc namespace=k8s.io Dec 13 14:51:57.773677 env[1300]: time="2024-12-13T14:51:57.773647842Z" level=info msg="cleaning up dead shim" Dec 13 14:51:57.785569 env[1300]: time="2024-12-13T14:51:57.785498453Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3778 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:51:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Dec 13 14:51:58.115564 kubelet[1623]: E1213 14:51:58.115061 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:58.223865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a3c8fceb01fae54c46ef4d2ea2f2a17d9d55a9a4c9b52b86684569abb02ebfc-rootfs.mount: Deactivated 
successfully. Dec 13 14:51:58.504659 env[1300]: time="2024-12-13T14:51:58.504596957Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:51:58.522127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2159767785.mount: Deactivated successfully. Dec 13 14:51:58.531671 env[1300]: time="2024-12-13T14:51:58.531592132Z" level=info msg="CreateContainer within sandbox \"889c6d91e206bbb1c33a442070ed9fe29f4af4f77c0a888114ee8e9534058ec2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8\"" Dec 13 14:51:58.532439 env[1300]: time="2024-12-13T14:51:58.532393025Z" level=info msg="StartContainer for \"bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8\"" Dec 13 14:51:58.613737 env[1300]: time="2024-12-13T14:51:58.613655841Z" level=info msg="StartContainer for \"bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8\" returns successfully" Dec 13 14:51:59.116033 kubelet[1623]: E1213 14:51:59.115897 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:51:59.326011 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:51:59.548373 kubelet[1623]: I1213 14:51:59.548328 1623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xz2v2" podStartSLOduration=5.548253406 podStartE2EDuration="5.548253406s" podCreationTimestamp="2024-12-13 14:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:51:59.544909501 +0000 UTC m=+88.631867750" watchObservedRunningTime="2024-12-13 14:51:59.548253406 +0000 UTC m=+88.635211655" Dec 13 14:51:59.874587 systemd[1]: 
run-containerd-runc-k8s.io-bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8-runc.MmGgvS.mount: Deactivated successfully. Dec 13 14:52:00.116428 kubelet[1623]: E1213 14:52:00.116344 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:01.117532 kubelet[1623]: E1213 14:52:01.117445 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:02.119329 kubelet[1623]: E1213 14:52:02.119277 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:02.132808 systemd[1]: run-containerd-runc-k8s.io-bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8-runc.qvrb1Z.mount: Deactivated successfully. Dec 13 14:52:02.765667 systemd-networkd[1075]: lxc_health: Link UP Dec 13 14:52:02.772064 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:52:02.771905 systemd-networkd[1075]: lxc_health: Gained carrier Dec 13 14:52:03.120887 kubelet[1623]: E1213 14:52:03.120727 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:04.121208 kubelet[1623]: E1213 14:52:04.121136 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:04.328387 systemd-networkd[1075]: lxc_health: Gained IPv6LL Dec 13 14:52:04.394747 systemd[1]: run-containerd-runc-k8s.io-bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8-runc.sxet45.mount: Deactivated successfully. 
Dec 13 14:52:05.123168 kubelet[1623]: E1213 14:52:05.123101 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:06.123668 kubelet[1623]: E1213 14:52:06.123568 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:06.654775 systemd[1]: run-containerd-runc-k8s.io-bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8-runc.9vUock.mount: Deactivated successfully. Dec 13 14:52:07.124040 kubelet[1623]: E1213 14:52:07.123854 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:08.124569 kubelet[1623]: E1213 14:52:08.124453 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:08.936991 systemd[1]: run-containerd-runc-k8s.io-bb1e87ce6ccf5630ef07740bd3a28208ef49d8274df9bb03f20edad4836d61f8-runc.5L8pnf.mount: Deactivated successfully. Dec 13 14:52:09.125177 kubelet[1623]: E1213 14:52:09.125061 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:10.125628 kubelet[1623]: E1213 14:52:10.125535 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:52:11.126109 kubelet[1623]: E1213 14:52:11.125996 1623 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"