Dec 13 06:49:46.904506 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 06:49:46.904554 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 06:49:46.904576 kernel: BIOS-provided physical RAM map: Dec 13 06:49:46.904587 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 06:49:46.904596 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 06:49:46.904605 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 06:49:46.904615 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 13 06:49:46.904625 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 13 06:49:46.904634 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 06:49:46.904643 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 06:49:46.904663 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 06:49:46.904672 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 06:49:46.904681 kernel: NX (Execute Disable) protection: active Dec 13 06:49:46.904691 kernel: SMBIOS 2.8 present. Dec 13 06:49:46.904702 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014 Dec 13 06:49:46.904719 kernel: Hypervisor detected: KVM Dec 13 06:49:46.904732 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 06:49:46.904742 kernel: kvm-clock: cpu 0, msr 1919b001, primary cpu clock Dec 13 06:49:46.904752 kernel: kvm-clock: using sched offset of 4694340930 cycles Dec 13 06:49:46.904763 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 06:49:46.904774 kernel: tsc: Detected 2799.998 MHz processor Dec 13 06:49:46.904784 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 06:49:46.904794 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 06:49:46.904804 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 13 06:49:46.904815 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 06:49:46.904829 kernel: Using GB pages for direct mapping Dec 13 06:49:46.904839 kernel: ACPI: Early table checksum verification disabled Dec 13 06:49:46.904849 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS ) Dec 13 06:49:46.904859 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 06:49:46.904869 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 06:49:46.904879 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 06:49:46.904889 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 13 06:49:46.904899 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 06:49:46.904909 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 06:49:46.904923 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) Dec 13 06:49:46.904934 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 06:49:46.904944 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 13 06:49:46.904954 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 13 06:49:46.904964 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 13 06:49:46.904974 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 13 06:49:46.904990 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 13 06:49:46.905005 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 13 06:49:46.905015 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 13 06:49:46.905026 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 06:49:46.905037 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 06:49:46.905059 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Dec 13 06:49:46.905071 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Dec 13 06:49:46.905081 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Dec 13 06:49:46.905116 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Dec 13 06:49:46.905128 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Dec 13 06:49:46.905139 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Dec 13 06:49:46.905149 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Dec 13 06:49:46.905160 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Dec 13 06:49:46.905171 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Dec 13 06:49:46.905181 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Dec 13 06:49:46.905192 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Dec 13 06:49:46.905203 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Dec 13 06:49:46.905213 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Dec 13 06:49:46.905228 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Dec 13 06:49:46.905239 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 06:49:46.905250 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 06:49:46.905261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 13 06:49:46.905272 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Dec 13 06:49:46.905282 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Dec 13 06:49:46.905293 kernel: Zone ranges: Dec 13 06:49:46.905304 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 06:49:46.905320 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 13 06:49:46.905335 kernel: Normal empty Dec 13 06:49:46.905346 kernel: Movable zone start for each node Dec 13 06:49:46.905357 kernel: Early memory node ranges Dec 13 06:49:46.905368 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 06:49:46.905378 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 13 06:49:46.905389 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 13 06:49:46.905400 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 06:49:46.905411 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 06:49:46.905421 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 13 06:49:46.905436 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 06:49:46.905447 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 06:49:46.905458 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 06:49:46.905468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 06:49:46.905479 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 06:49:46.905490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 06:49:46.905500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 06:49:46.905511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 06:49:46.905522 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 06:49:46.905540 kernel: TSC deadline timer available Dec 13 06:49:46.905551 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Dec 13 06:49:46.905562 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 06:49:46.905573 kernel: Booting paravirtualized kernel on KVM Dec 13 06:49:46.905584 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 06:49:46.905596 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Dec 13 06:49:46.905607 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Dec 13 06:49:46.905617 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Dec 13 06:49:46.905628 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 13 06:49:46.905642 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Dec 13 06:49:46.905658 kernel: kvm-guest: PV spinlocks enabled Dec 13 06:49:46.905681 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 06:49:46.905692 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Dec 13 06:49:46.905702 kernel: Policy zone: DMA32 Dec 13 06:49:46.905721 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 06:49:46.905733 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 06:49:46.905743 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 06:49:46.905757 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 06:49:46.905768 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 06:49:46.905779 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 192524K reserved, 0K cma-reserved) Dec 13 06:49:46.905789 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 13 06:49:46.905800 kernel: Kernel/User page tables isolation: enabled Dec 13 06:49:46.905810 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 06:49:46.905820 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 06:49:46.905831 kernel: rcu: Hierarchical RCU implementation. Dec 13 06:49:46.905842 kernel: rcu: RCU event tracing is enabled. Dec 13 06:49:46.905856 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 13 06:49:46.905867 kernel: Rude variant of Tasks RCU enabled. Dec 13 06:49:46.905877 kernel: Tracing variant of Tasks RCU enabled. Dec 13 06:49:46.905900 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
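
The kernel command line echoed above (BOOT_IMAGE, mount.usr, verity.usrhash, root=LABEL=ROOT, the two console= entries) is a plain space-separated mix of bare flags and key=value parameters. The sketch below is illustrative only, not part of the boot log: it shows one way such a line (or /proc/cmdline on a live system) could be split up to pull out values like verity.usrhash; the parse_cmdline helper is a hypothetical name.

```python
# Minimal sketch (not part of the log): parse a kernel command line like the one
# logged above into flags and key=value pairs. Repeated keys (e.g. console=) are
# collected into lists.
from collections import defaultdict
import shlex

def parse_cmdline(cmdline: str):
    params = defaultdict(list)
    flags = []
    for token in shlex.split(cmdline):
        if "=" in token:
            key, value = token.split("=", 1)
            params[key].append(value)
        else:
            flags.append(token)
    return dict(params), flags

if __name__ == "__main__":
    # Excerpt of the command line recorded in the log above
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
               "flatcar.first_boot=detected flatcar.oem.id=openstack "
               "verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c")
    params, flags = parse_cmdline(cmdline)
    print(params["console"])         # ['ttyS0,115200n8', 'tty0']
    print(params["verity.usrhash"])  # dm-verity root hash used for the /usr partition
```
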
Dec 13 06:49:46.905911 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 13 06:49:46.905921 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 13 06:49:46.905932 kernel: random: crng init done Dec 13 06:49:46.905955 kernel: Console: colour VGA+ 80x25 Dec 13 06:49:46.905967 kernel: printk: console [tty0] enabled Dec 13 06:49:46.905978 kernel: printk: console [ttyS0] enabled Dec 13 06:49:46.905989 kernel: ACPI: Core revision 20210730 Dec 13 06:49:46.906000 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 06:49:46.906015 kernel: x2apic enabled Dec 13 06:49:46.906027 kernel: Switched APIC routing to physical x2apic. Dec 13 06:49:46.906038 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 13 06:49:46.906060 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Dec 13 06:49:46.906072 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 06:49:46.906120 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 06:49:46.906135 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 06:49:46.906146 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 06:49:46.906157 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 06:49:46.906169 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 06:49:46.906180 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 06:49:46.906191 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 06:49:46.906203 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 06:49:46.906214 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 06:49:46.906225 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 06:49:46.906236 kernel: MMIO Stale Data: Unknown: No mitigations Dec 13 06:49:46.906253 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 06:49:46.906264 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 06:49:46.906276 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 06:49:46.906287 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 06:49:46.906298 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 06:49:46.906310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 06:49:46.906321 kernel: Freeing SMP alternatives memory: 32K Dec 13 06:49:46.906332 kernel: pid_max: default: 32768 minimum: 301 Dec 13 06:49:46.906343 kernel: LSM: Security Framework initializing Dec 13 06:49:46.906354 kernel: SELinux: Initializing. Dec 13 06:49:46.906366 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 06:49:46.906381 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 06:49:46.906393 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 13 06:49:46.906404 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Dec 13 06:49:46.906415 kernel: signal: max sigframe size: 1776 Dec 13 06:49:46.906427 kernel: rcu: Hierarchical SRCU implementation. 
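
The "Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)" message ties directly back to the 2799.998 MHz TSC detected earlier: under KVM the calibration is skipped and lpj is preset from the TSC rate. The check below is not from the log; it reproduces the printed figures under the assumption of a CONFIG_HZ=1000 kernel, which is what these numbers are consistent with.

```python
# Sanity check (not part of the log) of the BogoMIPS values printed above,
# assuming CONFIG_HZ=1000. lpj (loops per jiffy) is preset to 2799998 from the
# detected 2799.998 MHz TSC.
HZ = 1000            # assumed tick rate
lpj = 2_799_998      # preset value from the log

def bogomips_string(loops_per_jiffy: int) -> str:
    # Integer arithmetic mirroring how the kernel formats "X.YY BogoMIPS"
    whole = loops_per_jiffy // (500000 // HZ)
    frac = (loops_per_jiffy // (5000 // HZ)) % 100
    return f"{whole}.{frac:02d}"

print(bogomips_string(lpj))      # 5599.99  -> "5599.99 BogoMIPS (lpj=2799998)"
print(bogomips_string(2 * lpj))  # 11199.99 -> "Total of 2 processors activated (11199.99 BogoMIPS)"
```
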
Dec 13 06:49:46.906438 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 06:49:46.906450 kernel: smp: Bringing up secondary CPUs ... Dec 13 06:49:46.906461 kernel: x86: Booting SMP configuration: Dec 13 06:49:46.906473 kernel: .... node #0, CPUs: #1 Dec 13 06:49:46.906488 kernel: kvm-clock: cpu 1, msr 1919b041, secondary cpu clock Dec 13 06:49:46.906499 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 06:49:46.906510 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Dec 13 06:49:46.906522 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 06:49:46.906533 kernel: smpboot: Max logical packages: 16 Dec 13 06:49:46.906544 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Dec 13 06:49:46.906556 kernel: devtmpfs: initialized Dec 13 06:49:46.906567 kernel: x86/mm: Memory block size: 128MB Dec 13 06:49:46.906578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 06:49:46.906590 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 13 06:49:46.906605 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 06:49:46.906616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 06:49:46.906628 kernel: audit: initializing netlink subsys (disabled) Dec 13 06:49:46.906639 kernel: audit: type=2000 audit(1734072585.942:1): state=initialized audit_enabled=0 res=1 Dec 13 06:49:46.906650 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 06:49:46.906668 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 06:49:46.906679 kernel: cpuidle: using governor menu Dec 13 06:49:46.906691 kernel: ACPI: bus type PCI registered Dec 13 06:49:46.906702 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 06:49:46.906717 kernel: dca service started, version 1.12.1 Dec 13 06:49:46.906731 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 06:49:46.906742 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 06:49:46.906754 kernel: PCI: Using configuration type 1 for base access Dec 13 06:49:46.906765 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
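
The audit record above carries a raw Unix timestamp, audit(1734072585.942:1). As a quick cross-check (not part of the log), converting it to UTC lines up with the "Dec 13 06:49:4x" prefixes on the surrounding messages and with the rtc_cmos line later on, which sets the clock to 2024-12-13T06:49:46 UTC (1734072586).

```python
# Convert the audit record's epoch timestamp to UTC and compare with the
# journal prefixes seen throughout this log.
from datetime import datetime, timezone

audit_epoch = 1734072585.942
print(datetime.fromtimestamp(audit_epoch, tz=timezone.utc).isoformat())
# 2024-12-13T06:49:45.942000+00:00
```
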
Dec 13 06:49:46.906776 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 06:49:46.906788 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 06:49:46.906799 kernel: ACPI: Added _OSI(Module Device) Dec 13 06:49:46.906810 kernel: ACPI: Added _OSI(Processor Device) Dec 13 06:49:46.906833 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 06:49:46.906845 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 06:49:46.906856 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 06:49:46.906867 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 06:49:46.906879 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 06:49:46.906890 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 06:49:46.906901 kernel: ACPI: Interpreter enabled Dec 13 06:49:46.906912 kernel: ACPI: PM: (supports S0 S5) Dec 13 06:49:46.906924 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 06:49:46.906939 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 06:49:46.906951 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 06:49:46.906962 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 06:49:46.907237 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 06:49:46.907387 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 06:49:46.907528 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 06:49:46.907545 kernel: PCI host bridge to bus 0000:00 Dec 13 06:49:46.907712 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 06:49:46.907842 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 06:49:46.907970 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 06:49:46.908125 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 13 06:49:46.908255 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 06:49:46.908382 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 13 06:49:46.908510 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 06:49:46.908684 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 06:49:46.908849 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Dec 13 06:49:46.908994 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Dec 13 06:49:46.909166 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Dec 13 06:49:46.909308 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Dec 13 06:49:46.909460 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 06:49:46.909633 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.909798 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Dec 13 06:49:46.909966 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.914194 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Dec 13 06:49:46.914352 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.914499 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Dec 13 06:49:46.914685 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.914837 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Dec 13 06:49:46.914986 kernel: pci 0000:00:02.4: 
[1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.922198 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Dec 13 06:49:46.922375 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.922531 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Dec 13 06:49:46.922709 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.922851 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Dec 13 06:49:46.923000 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 06:49:46.923180 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Dec 13 06:49:46.923336 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 06:49:46.923477 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df] Dec 13 06:49:46.923617 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Dec 13 06:49:46.923772 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 13 06:49:46.923915 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Dec 13 06:49:46.924112 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 06:49:46.924275 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f] Dec 13 06:49:46.924418 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Dec 13 06:49:46.924559 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Dec 13 06:49:46.924711 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 06:49:46.924860 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 06:49:46.925034 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 06:49:46.925212 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff] Dec 13 06:49:46.925355 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Dec 13 06:49:46.925552 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 06:49:46.925706 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 06:49:46.925889 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Dec 13 06:49:46.926067 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Dec 13 06:49:46.926232 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 06:49:46.926373 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Dec 13 06:49:46.926512 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 06:49:46.926651 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 06:49:46.926822 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 06:49:46.926988 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Dec 13 06:49:46.927200 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Dec 13 06:49:46.927366 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 06:49:46.927530 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Dec 13 06:49:46.927702 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 06:49:46.929137 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 06:49:46.929340 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 06:49:46.929531 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Dec 13 06:49:46.929726 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 06:49:46.929895 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 06:49:46.930085 kernel: pci 0000:00:02.1: bridge window 
[mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 06:49:46.930280 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 06:49:46.930448 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 13 06:49:46.930614 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 06:49:46.930766 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 06:49:46.930918 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 06:49:46.931085 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 06:49:46.931260 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 06:49:46.931411 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 06:49:46.931565 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 06:49:46.931715 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 06:49:46.931884 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 06:49:46.932040 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 06:49:46.932234 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 06:49:46.932387 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 06:49:46.932552 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 06:49:46.932728 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 06:49:46.932902 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 06:49:46.933078 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 06:49:46.933279 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 06:49:46.933444 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 06:49:46.933474 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 06:49:46.933486 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 06:49:46.933498 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 06:49:46.933510 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 06:49:46.933521 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 06:49:46.933533 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 06:49:46.933544 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 06:49:46.933562 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 06:49:46.933574 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 06:49:46.933586 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 06:49:46.933597 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 06:49:46.933609 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 06:49:46.933621 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 06:49:46.933632 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 06:49:46.933644 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 06:49:46.933656 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 06:49:46.933672 kernel: iommu: Default domain type: Translated Dec 13 06:49:46.933684 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 06:49:46.933859 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 06:49:46.934024 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 06:49:46.934215 kernel: pci 0000:00:01.0: 
vgaarb: bridge control possible Dec 13 06:49:46.934234 kernel: vgaarb: loaded Dec 13 06:49:46.934245 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 06:49:46.934257 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 06:49:46.934275 kernel: PTP clock support registered Dec 13 06:49:46.934287 kernel: PCI: Using ACPI for IRQ routing Dec 13 06:49:46.934298 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 06:49:46.934310 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 06:49:46.934333 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 13 06:49:46.934344 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 06:49:46.934355 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 06:49:46.934367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 06:49:46.934378 kernel: pnp: PnP ACPI init Dec 13 06:49:46.934584 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 06:49:46.934603 kernel: pnp: PnP ACPI: found 5 devices Dec 13 06:49:46.934627 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 06:49:46.934639 kernel: NET: Registered PF_INET protocol family Dec 13 06:49:46.934651 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 06:49:46.934662 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 06:49:46.934674 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 06:49:46.934686 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 06:49:46.934703 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 06:49:46.934714 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 06:49:46.934726 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 06:49:46.934738 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 06:49:46.934749 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 06:49:46.934761 kernel: NET: Registered PF_XDP protocol family Dec 13 06:49:46.934922 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 06:49:46.935164 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 06:49:46.935339 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 13 06:49:46.935502 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 06:49:46.935653 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 06:49:46.935804 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 06:49:46.935958 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 06:49:46.936176 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 06:49:46.936336 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 06:49:46.936487 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 06:49:46.936637 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff] Dec 13 06:49:46.936787 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff] Dec 13 06:49:46.936939 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff] Dec 13 06:49:46.937112 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff] Dec 13 
06:49:46.937273 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 06:49:46.937430 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Dec 13 06:49:46.937600 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 06:49:46.937776 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 06:49:46.937933 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 06:49:46.938152 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Dec 13 06:49:46.938306 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 06:49:46.938457 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 06:49:46.938608 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 06:49:46.938758 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff] Dec 13 06:49:46.938911 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 06:49:46.939075 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 06:49:46.939258 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 06:49:46.939410 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff] Dec 13 06:49:46.939561 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 06:49:46.939711 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 06:49:46.939861 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 06:49:46.940012 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff] Dec 13 06:49:46.940191 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 06:49:46.940349 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 06:49:46.940501 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 06:49:46.940653 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff] Dec 13 06:49:46.940805 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 06:49:46.940958 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 06:49:46.941143 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 06:49:46.941308 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff] Dec 13 06:49:46.941479 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 06:49:46.941644 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 06:49:46.941815 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 06:49:46.941980 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff] Dec 13 06:49:46.942166 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 06:49:46.942337 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 06:49:46.942501 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 06:49:46.942682 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff] Dec 13 06:49:46.942857 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 06:49:46.943009 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 06:49:46.960625 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 06:49:46.960782 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 06:49:46.960911 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 06:49:46.961067 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 13 06:49:46.961222 kernel: pci_bus 0000:00: resource 8 [mem 
0xc0000000-0xfebfffff window] Dec 13 06:49:46.961354 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 13 06:49:46.961512 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff] Dec 13 06:49:46.961669 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 13 06:49:46.961805 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 06:49:46.961953 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff] Dec 13 06:49:46.962124 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 06:49:46.962278 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 06:49:46.962437 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff] Dec 13 06:49:46.962583 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 06:49:46.962725 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 06:49:46.962870 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff] Dec 13 06:49:46.963008 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 06:49:46.963173 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 06:49:46.963327 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff] Dec 13 06:49:46.963465 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 06:49:46.963601 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 06:49:46.963758 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff] Dec 13 06:49:46.963902 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 06:49:46.964054 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 06:49:46.964218 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff] Dec 13 06:49:46.964365 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 06:49:46.964532 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 06:49:46.964699 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff] Dec 13 06:49:46.964851 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 06:49:46.965025 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 06:49:46.965212 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Dec 13 06:49:46.965369 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 06:49:46.965537 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 06:49:46.965557 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 06:49:46.965570 kernel: PCI: CLS 0 bytes, default 64 Dec 13 06:49:46.965583 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 06:49:46.965595 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 06:49:46.965608 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 06:49:46.965626 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 13 06:49:46.965643 kernel: Initialise system trusted keyrings Dec 13 06:49:46.965655 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 06:49:46.965668 kernel: Key type asymmetric registered Dec 13 06:49:46.965680 kernel: Asymmetric key parser 'x509' registered Dec 13 06:49:46.965692 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 06:49:46.965704 kernel: io scheduler mq-deadline registered Dec 13 06:49:46.965723 kernel: io 
scheduler kyber registered Dec 13 06:49:46.965735 kernel: io scheduler bfq registered Dec 13 06:49:46.965908 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 06:49:46.966080 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 06:49:46.966254 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.966438 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 06:49:46.966604 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 06:49:46.966748 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.966904 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 06:49:46.967084 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 06:49:46.975278 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.975431 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 06:49:46.975578 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 06:49:46.975723 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.975868 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 06:49:46.976020 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 06:49:46.978761 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.978930 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 06:49:46.979111 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 06:49:46.979271 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.979418 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 06:49:46.979581 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 06:49:46.979731 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.979886 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 06:49:46.980031 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 06:49:46.980224 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:49:46.980245 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 06:49:46.980264 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 06:49:46.980277 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 06:49:46.980290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 06:49:46.980302 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 06:49:46.980315 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 06:49:46.980339 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 06:49:46.980350 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 06:49:46.980362 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 06:49:46.980537 kernel: rtc_cmos 00:03: RTC can wake 
from S4 Dec 13 06:49:46.980681 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 06:49:46.980815 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T06:49:46 UTC (1734072586) Dec 13 06:49:46.980947 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 06:49:46.980965 kernel: intel_pstate: CPU model not supported Dec 13 06:49:46.980978 kernel: NET: Registered PF_INET6 protocol family Dec 13 06:49:46.980990 kernel: Segment Routing with IPv6 Dec 13 06:49:46.981002 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 06:49:46.981014 kernel: NET: Registered PF_PACKET protocol family Dec 13 06:49:46.981032 kernel: Key type dns_resolver registered Dec 13 06:49:46.981056 kernel: IPI shorthand broadcast: enabled Dec 13 06:49:46.981070 kernel: sched_clock: Marking stable (950113581, 218286472)->(1428170090, -259770037) Dec 13 06:49:46.981082 kernel: registered taskstats version 1 Dec 13 06:49:46.981105 kernel: Loading compiled-in X.509 certificates Dec 13 06:49:46.981118 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 06:49:46.981130 kernel: Key type .fscrypt registered Dec 13 06:49:46.981142 kernel: Key type fscrypt-provisioning registered Dec 13 06:49:46.981154 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 06:49:46.981172 kernel: ima: Allocated hash algorithm: sha1 Dec 13 06:49:46.981184 kernel: ima: No architecture policies found Dec 13 06:49:46.981196 kernel: clk: Disabling unused clocks Dec 13 06:49:46.981209 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 06:49:46.981221 kernel: Write protecting the kernel read-only data: 28672k Dec 13 06:49:46.981233 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 06:49:46.981245 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 06:49:46.981258 kernel: Run /init as init process Dec 13 06:49:46.981270 kernel: with arguments: Dec 13 06:49:46.981286 kernel: /init Dec 13 06:49:46.981298 kernel: with environment: Dec 13 06:49:46.981310 kernel: HOME=/ Dec 13 06:49:46.981321 kernel: TERM=linux Dec 13 06:49:46.981341 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 06:49:46.981362 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:49:46.981379 systemd[1]: Detected virtualization kvm. Dec 13 06:49:46.981392 systemd[1]: Detected architecture x86-64. Dec 13 06:49:46.981422 systemd[1]: Running in initrd. Dec 13 06:49:46.981433 systemd[1]: No hostname configured, using default hostname. Dec 13 06:49:46.981445 systemd[1]: Hostname set to . Dec 13 06:49:46.981458 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:49:46.981474 systemd[1]: Queued start job for default target initrd.target. Dec 13 06:49:46.981487 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:49:46.981499 systemd[1]: Reached target cryptsetup.target. Dec 13 06:49:46.981511 systemd[1]: Reached target paths.target. Dec 13 06:49:46.981527 systemd[1]: Reached target slices.target. Dec 13 06:49:46.981539 systemd[1]: Reached target swap.target. Dec 13 06:49:46.981572 systemd[1]: Reached target timers.target. Dec 13 06:49:46.981586 systemd[1]: Listening on iscsid.socket. 
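
The systemd banner above lists the build-time feature set as a string of +NAME / -NAME tokens plus the default cgroup hierarchy. The snippet below is illustrative only, not part of the log: it splits that string into enabled and disabled options, which can be handy when comparing two systemd builds.

```python
# Minimal sketch (not part of the log): split the systemd 252 feature string
# logged above into enabled (+NAME) and disabled (-NAME) build options.
feature_string = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                  "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
                  "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                  "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                  "-XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified")

tokens   = feature_string.split()
enabled  = [t[1:] for t in tokens if t.startswith("+")]
disabled = [t[1:] for t in tokens if t.startswith("-")]
other    = [t for t in tokens if t[0] not in "+-"]

print(enabled[:4])   # ['PAM', 'AUDIT', 'SELINUX', 'IMA']
print(disabled[:4])  # ['APPARMOR', 'GNUTLS', 'ACL', 'ELFUTILS']
print(other)         # ['default-hierarchy=unified']
```
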
Dec 13 06:49:46.981599 systemd[1]: Listening on iscsiuio.socket. Dec 13 06:49:46.981611 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 06:49:46.981624 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 06:49:46.981642 systemd[1]: Listening on systemd-journald.socket. Dec 13 06:49:46.981661 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:49:46.981674 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:49:46.981687 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:49:46.981700 systemd[1]: Reached target sockets.target. Dec 13 06:49:46.981713 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:49:46.981725 systemd[1]: Finished network-cleanup.service. Dec 13 06:49:46.981738 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 06:49:46.981751 systemd[1]: Starting systemd-journald.service... Dec 13 06:49:46.981764 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:49:46.981781 systemd[1]: Starting systemd-resolved.service... Dec 13 06:49:46.981794 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 06:49:46.981806 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:49:46.981819 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 06:49:46.981832 kernel: audit: type=1130 audit(1734072586.976:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:46.981854 systemd-journald[201]: Journal started Dec 13 06:49:46.981933 systemd-journald[201]: Runtime Journal (/run/log/journal/901a8a7feed84f18b1ebaa779d92240e) is 4.7M, max 38.1M, 33.3M free. Dec 13 06:49:46.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:46.922149 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 06:49:47.008593 systemd[1]: Started systemd-resolved.service. Dec 13 06:49:47.008624 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 06:49:47.008643 kernel: audit: type=1130 audit(1734072586.991:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.008660 kernel: audit: type=1130 audit(1734072587.000:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.008685 systemd[1]: Started systemd-journald.service. Dec 13 06:49:47.008703 kernel: Bridge firewalling registered Dec 13 06:49:46.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:46.966665 systemd-resolved[203]: Positive Trust Anchors: Dec 13 06:49:46.966685 systemd-resolved[203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:49:47.026946 kernel: audit: type=1130 audit(1734072587.010:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.026983 kernel: audit: type=1130 audit(1734072587.016:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:46.966728 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:49:46.973459 systemd-resolved[203]: Defaulting to hostname 'linux'. Dec 13 06:49:47.008340 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 06:49:47.010736 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 06:49:47.037033 kernel: SCSI subsystem initialized Dec 13 06:49:47.016350 systemd[1]: Reached target nss-lookup.target. Dec 13 06:49:47.033194 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 06:49:47.035865 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 06:49:47.047750 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 06:49:47.054223 kernel: audit: type=1130 audit(1734072587.048:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.060777 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 06:49:47.069840 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 06:49:47.069866 kernel: device-mapper: uevent: version 1.0.3 Dec 13 06:49:47.069883 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 06:49:47.069899 kernel: audit: type=1130 audit(1734072587.064:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:49:47.070593 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 06:49:47.096903 kernel: audit: type=1130 audit(1734072587.072:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.071064 systemd[1]: Starting dracut-cmdline.service... Dec 13 06:49:47.099276 dracut-cmdline[219]: dracut-dracut-053 Dec 13 06:49:47.072010 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:49:47.107924 kernel: audit: type=1130 audit(1734072587.101:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.108189 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 06:49:47.073507 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:49:47.099576 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:49:47.186126 kernel: Loading iSCSI transport class v2.0-870. Dec 13 06:49:47.207214 kernel: iscsi: registered transport (tcp) Dec 13 06:49:47.235487 kernel: iscsi: registered transport (qla4xxx) Dec 13 06:49:47.235550 kernel: QLogic iSCSI HBA Driver Dec 13 06:49:47.283161 systemd[1]: Finished dracut-cmdline.service. Dec 13 06:49:47.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.284991 systemd[1]: Starting dracut-pre-udev.service... Dec 13 06:49:47.341173 kernel: raid6: sse2x4 gen() 13993 MB/s Dec 13 06:49:47.359130 kernel: raid6: sse2x4 xor() 8204 MB/s Dec 13 06:49:47.377135 kernel: raid6: sse2x2 gen() 9921 MB/s Dec 13 06:49:47.395160 kernel: raid6: sse2x2 xor() 8186 MB/s Dec 13 06:49:47.413158 kernel: raid6: sse2x1 gen() 9738 MB/s Dec 13 06:49:47.431701 kernel: raid6: sse2x1 xor() 7549 MB/s Dec 13 06:49:47.431777 kernel: raid6: using algorithm sse2x4 gen() 13993 MB/s Dec 13 06:49:47.431795 kernel: raid6: .... xor() 8204 MB/s, rmw enabled Dec 13 06:49:47.432946 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 06:49:47.449132 kernel: xor: automatically using best checksumming function avx Dec 13 06:49:47.561164 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 06:49:47.573372 systemd[1]: Finished dracut-pre-udev.service. Dec 13 06:49:47.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:49:47.574000 audit: BPF prog-id=7 op=LOAD Dec 13 06:49:47.574000 audit: BPF prog-id=8 op=LOAD Dec 13 06:49:47.575294 systemd[1]: Starting systemd-udevd.service... Dec 13 06:49:47.591176 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 06:49:47.598640 systemd[1]: Started systemd-udevd.service. Dec 13 06:49:47.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.604043 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 06:49:47.620851 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Dec 13 06:49:47.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.660991 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 06:49:47.662707 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:49:47.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:47.749347 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 06:49:47.834419 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 06:49:47.894733 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 06:49:47.894757 kernel: GPT:17805311 != 125829119 Dec 13 06:49:47.894773 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 06:49:47.894789 kernel: GPT:17805311 != 125829119 Dec 13 06:49:47.894803 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 06:49:47.894818 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:49:47.894834 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 06:49:47.894849 kernel: ACPI: bus type USB registered Dec 13 06:49:47.894869 kernel: usbcore: registered new interface driver usbfs Dec 13 06:49:47.894886 kernel: usbcore: registered new interface driver hub Dec 13 06:49:47.894901 kernel: usbcore: registered new device driver usb Dec 13 06:49:47.897108 kernel: AVX version of gcm_enc/dec engaged. Dec 13 06:49:47.897144 kernel: AES CTR mode by8 optimization enabled Dec 13 06:49:47.903116 kernel: libata version 3.00 loaded. Dec 13 06:49:47.932120 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Dec 13 06:49:47.935797 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
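
The virtio-blk and GPT messages above encode a couple of numbers worth unpacking: /dev/vda is 125829120 sectors of 512 bytes (the logged "64.4 GB/60.0 GiB"), while the backup GPT header sits at LBA 17805311 instead of the expected last LBA 125829119, which usually indicates the image's partition table was written for a smaller disk that was later enlarged (disk-uuid.service rewrites the secondary header further down). The arithmetic below is a check added here, not output from the log, and assumes the reported 512-byte logical sectors.

```python
# Worked numbers behind the virtio-blk and GPT messages above (not from the log).
SECTOR = 512

disk_sectors = 125_829_120      # size reported for /dev/vda
image_last_lba = 17_805_311     # LBA where the backup GPT header was found

disk_bytes = disk_sectors * SECTOR
print(disk_bytes / 10**9, "GB")   # 64.42450944 -> logged as "64.4 GB"
print(disk_bytes / 2**30, "GiB")  # 60.0        -> logged as "60.0 GiB"

# The backup header normally sits on the last LBA of the disk the image was
# built for, so the original image covered (image_last_lba + 1) sectors:
image_bytes = (image_last_lba + 1) * SECTOR
print(round(image_bytes / 2**30, 2), "GiB")  # ~8.49 GiB before the disk was grown to 60 GiB
```
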
Dec 13 06:49:48.045326 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 06:49:48.045574 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 06:49:48.045595 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 06:49:48.045755 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 06:49:48.045911 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:49:48.046106 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 06:49:48.046273 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 06:49:48.046440 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:49:48.046605 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 06:49:48.046786 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 06:49:48.046945 kernel: hub 1-0:1.0: USB hub found Dec 13 06:49:48.047175 kernel: hub 1-0:1.0: 4 ports detected Dec 13 06:49:48.047367 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 06:49:48.047634 kernel: hub 2-0:1.0: USB hub found Dec 13 06:49:48.047825 kernel: hub 2-0:1.0: 4 ports detected Dec 13 06:49:48.048073 kernel: scsi host0: ahci Dec 13 06:49:48.048268 kernel: scsi host1: ahci Dec 13 06:49:48.048453 kernel: scsi host2: ahci Dec 13 06:49:48.048625 kernel: scsi host3: ahci Dec 13 06:49:48.048801 kernel: scsi host4: ahci Dec 13 06:49:48.048976 kernel: scsi host5: ahci Dec 13 06:49:48.049175 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Dec 13 06:49:48.049194 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Dec 13 06:49:48.049211 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Dec 13 06:49:48.049227 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Dec 13 06:49:48.049242 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Dec 13 06:49:48.049258 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Dec 13 06:49:48.048221 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 06:49:48.048860 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 06:49:48.058195 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 06:49:48.063181 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 06:49:48.064827 systemd[1]: Starting disk-uuid.service... Dec 13 06:49:48.072253 disk-uuid[528]: Primary Header is updated. Dec 13 06:49:48.072253 disk-uuid[528]: Secondary Entries is updated. Dec 13 06:49:48.072253 disk-uuid[528]: Secondary Header is updated. 
Dec 13 06:49:48.078184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:49:48.082138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:49:48.207319 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 06:49:48.290123 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 06:49:48.290219 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 06:49:48.293072 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 06:49:48.296215 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 06:49:48.296265 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 06:49:48.297787 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 06:49:48.347125 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 06:49:48.354217 kernel: usbcore: registered new interface driver usbhid Dec 13 06:49:48.354268 kernel: usbhid: USB HID core driver Dec 13 06:49:48.363320 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 06:49:48.363401 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 06:49:49.086139 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:49:49.086620 disk-uuid[529]: The operation has completed successfully. Dec 13 06:49:49.134346 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 06:49:49.141833 systemd[1]: Finished disk-uuid.service. Dec 13 06:49:49.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.145550 systemd[1]: Starting verity-setup.service... Dec 13 06:49:49.165125 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 06:49:49.218270 systemd[1]: Found device dev-mapper-usr.device. Dec 13 06:49:49.221101 systemd[1]: Mounting sysusr-usr.mount... Dec 13 06:49:49.223931 systemd[1]: Finished verity-setup.service. Dec 13 06:49:49.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.310118 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 06:49:49.311115 systemd[1]: Mounted sysusr-usr.mount. Dec 13 06:49:49.311919 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 06:49:49.313016 systemd[1]: Starting ignition-setup.service... Dec 13 06:49:49.314862 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 06:49:49.331242 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:49:49.331283 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:49:49.331301 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:49:49.345802 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 06:49:49.352252 systemd[1]: Finished ignition-setup.service. Dec 13 06:49:49.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 06:49:49.353946 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 06:49:49.455235 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 06:49:49.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.457000 audit: BPF prog-id=9 op=LOAD Dec 13 06:49:49.458284 systemd[1]: Starting systemd-networkd.service... Dec 13 06:49:49.489689 systemd-networkd[708]: lo: Link UP Dec 13 06:49:49.489705 systemd-networkd[708]: lo: Gained carrier Dec 13 06:49:49.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.491026 systemd-networkd[708]: Enumeration completed Dec 13 06:49:49.491728 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 06:49:49.494237 systemd-networkd[708]: eth0: Link UP Dec 13 06:49:49.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.494243 systemd-networkd[708]: eth0: Gained carrier Dec 13 06:49:49.502230 systemd[1]: Started systemd-networkd.service. Dec 13 06:49:49.507386 systemd[1]: Reached target network.target. Dec 13 06:49:49.509130 systemd[1]: Starting iscsiuio.service... Dec 13 06:49:49.527890 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:49:49.527890 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 06:49:49.527890 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 06:49:49.527890 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 06:49:49.527890 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:49:49.527890 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 06:49:49.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.517908 systemd[1]: Started iscsiuio.service. Dec 13 06:49:49.519731 systemd[1]: Starting iscsid.service... Dec 13 06:49:49.531353 systemd[1]: Started iscsid.service. Dec 13 06:49:49.533419 systemd[1]: Starting dracut-initqueue.service... 
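The iscsid warnings above are harmless here: iscsid and iscsiuio are started inside the initrd by dracut's iSCSI support and no iSCSI targets are configured on this host. If software iSCSI were actually needed, the missing file could be created exactly as the message asks, for instance (a sketch, assuming the iscsi-iname helper from open-iscsi is available):

  # generate a random but well-formed IQN and persist it for iscsid
  echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
  systemctl restart iscsid.service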
Dec 13 06:49:49.538224 systemd-networkd[708]: eth0: DHCPv4 address 10.243.75.202/30, gateway 10.243.75.201 acquired from 10.243.75.201 Dec 13 06:49:49.545460 ignition[622]: Ignition 2.14.0 Dec 13 06:49:49.545487 ignition[622]: Stage: fetch-offline Dec 13 06:49:49.545629 ignition[622]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:49:49.545692 ignition[622]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:49:49.547404 ignition[622]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:49:49.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.547568 ignition[622]: parsed url from cmdline: "" Dec 13 06:49:49.549264 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 06:49:49.547575 ignition[622]: no config URL provided Dec 13 06:49:49.551259 systemd[1]: Starting ignition-fetch.service... Dec 13 06:49:49.547584 ignition[622]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 06:49:49.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.547616 ignition[622]: no config at "/usr/lib/ignition/user.ign" Dec 13 06:49:49.558512 systemd[1]: Finished dracut-initqueue.service. Dec 13 06:49:49.547627 ignition[622]: failed to fetch config: resource requires networking Dec 13 06:49:49.559290 systemd[1]: Reached target remote-fs-pre.target. Dec 13 06:49:49.548141 ignition[622]: Ignition finished successfully Dec 13 06:49:49.559874 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 06:49:49.572526 ignition[720]: Ignition 2.14.0 Dec 13 06:49:49.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.560526 systemd[1]: Reached target remote-fs.target. Dec 13 06:49:49.572537 ignition[720]: Stage: fetch Dec 13 06:49:49.562183 systemd[1]: Starting dracut-pre-mount.service... Dec 13 06:49:49.572685 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:49:49.574785 systemd[1]: Finished dracut-pre-mount.service. Dec 13 06:49:49.572716 ignition[720]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:49:49.573887 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:49:49.574036 ignition[720]: parsed url from cmdline: "" Dec 13 06:49:49.574043 ignition[720]: no config URL provided Dec 13 06:49:49.574052 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 06:49:49.574067 ignition[720]: no config at "/usr/lib/ignition/user.ign" Dec 13 06:49:49.577124 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 06:49:49.577167 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 13 06:49:49.577342 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 06:49:49.594950 ignition[720]: GET result: OK Dec 13 06:49:49.595782 ignition[720]: parsing config with SHA512: 7e5248428cd328d241ae17229e8e83824d5a3c9896163e8b420761fa0a0854f0d9edcd4049b51b802851bb4fadb1072938afd146dccbea5c7adc1f85c740da1e Dec 13 06:49:49.602643 unknown[720]: fetched base config from "system" Dec 13 06:49:49.602662 unknown[720]: fetched base config from "system" Dec 13 06:49:49.603228 ignition[720]: fetch: fetch complete Dec 13 06:49:49.602686 unknown[720]: fetched user config from "openstack" Dec 13 06:49:49.603237 ignition[720]: fetch: fetch passed Dec 13 06:49:49.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.604709 systemd[1]: Finished ignition-fetch.service. Dec 13 06:49:49.603308 ignition[720]: Ignition finished successfully Dec 13 06:49:49.606665 systemd[1]: Starting ignition-kargs.service... Dec 13 06:49:49.618125 ignition[734]: Ignition 2.14.0 Dec 13 06:49:49.618143 ignition[734]: Stage: kargs Dec 13 06:49:49.618296 ignition[734]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:49:49.618329 ignition[734]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:49:49.619533 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:49:49.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.622058 systemd[1]: Finished ignition-kargs.service. Dec 13 06:49:49.620638 ignition[734]: kargs: kargs passed Dec 13 06:49:49.620700 ignition[734]: Ignition finished successfully Dec 13 06:49:49.624174 systemd[1]: Starting ignition-disks.service... Dec 13 06:49:49.633618 ignition[739]: Ignition 2.14.0 Dec 13 06:49:49.633637 ignition[739]: Stage: disks Dec 13 06:49:49.633789 ignition[739]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:49:49.633822 ignition[739]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:49:49.635041 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:49:49.636153 ignition[739]: disks: disks passed Dec 13 06:49:49.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.637447 systemd[1]: Finished ignition-disks.service. Dec 13 06:49:49.636215 ignition[739]: Ignition finished successfully Dec 13 06:49:49.638244 systemd[1]: Reached target initrd-root-device.target. Dec 13 06:49:49.639305 systemd[1]: Reached target local-fs-pre.target. Dec 13 06:49:49.640480 systemd[1]: Reached target local-fs.target. Dec 13 06:49:49.641710 systemd[1]: Reached target sysinit.target. Dec 13 06:49:49.642883 systemd[1]: Reached target basic.target. Dec 13 06:49:49.645191 systemd[1]: Starting systemd-fsck-root.service... 
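No config drive labelled config-2 (or CONFIG-2) turned up, so Ignition fell back to the OpenStack metadata service and fetched the user data over HTTP on the first attempt. The same sources can be inspected by hand from a running instance, for example (illustrative only):

  curl -s http://169.254.169.254/openstack/latest/user_data
  # or, when a config drive is attached instead of (or alongside) the metadata service:
  mount -o ro /dev/disk/by-label/config-2 /mnt
  cat /mnt/openstack/latest/user_data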
Dec 13 06:49:49.666122 systemd-fsck[746]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 06:49:49.670951 systemd[1]: Finished systemd-fsck-root.service. Dec 13 06:49:49.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.672660 systemd[1]: Mounting sysroot.mount... Dec 13 06:49:49.683132 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 06:49:49.683729 systemd[1]: Mounted sysroot.mount. Dec 13 06:49:49.684512 systemd[1]: Reached target initrd-root-fs.target. Dec 13 06:49:49.686969 systemd[1]: Mounting sysroot-usr.mount... Dec 13 06:49:49.688144 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 06:49:49.689079 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 06:49:49.691931 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 06:49:49.691972 systemd[1]: Reached target ignition-diskful.target. Dec 13 06:49:49.694379 systemd[1]: Mounted sysroot-usr.mount. Dec 13 06:49:49.696361 systemd[1]: Starting initrd-setup-root.service... Dec 13 06:49:49.703936 initrd-setup-root[757]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 06:49:49.718237 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory Dec 13 06:49:49.726931 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 06:49:49.736032 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 06:49:49.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.800047 systemd[1]: Finished initrd-setup-root.service. Dec 13 06:49:49.801902 systemd[1]: Starting ignition-mount.service... Dec 13 06:49:49.805058 systemd[1]: Starting sysroot-boot.service... Dec 13 06:49:49.815175 bash[800]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 06:49:49.834012 coreos-metadata[752]: Dec 13 06:49:49.833 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 06:49:49.837927 systemd[1]: Finished sysroot-boot.service. Dec 13 06:49:49.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.839923 ignition[802]: INFO : Ignition 2.14.0 Dec 13 06:49:49.839923 ignition[802]: INFO : Stage: mount Dec 13 06:49:49.841376 ignition[802]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:49:49.841376 ignition[802]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:49:49.843462 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:49:49.843462 ignition[802]: INFO : mount: mount passed Dec 13 06:49:49.843462 ignition[802]: INFO : Ignition finished successfully Dec 13 06:49:49.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:49:49.860202 coreos-metadata[752]: Dec 13 06:49:49.860 INFO Fetch successful Dec 13 06:49:49.844281 systemd[1]: Finished ignition-mount.service. Dec 13 06:49:49.861690 coreos-metadata[752]: Dec 13 06:49:49.860 INFO wrote hostname srv-zd2v7.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 06:49:49.863333 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 06:49:49.863476 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 06:49:49.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:49.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:50.240785 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 06:49:50.252124 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (810) Dec 13 06:49:50.257210 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:49:50.257244 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:49:50.257262 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:49:50.262885 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 06:49:50.264645 systemd[1]: Starting ignition-files.service... Dec 13 06:49:50.284385 ignition[830]: INFO : Ignition 2.14.0 Dec 13 06:49:50.285436 ignition[830]: INFO : Stage: files Dec 13 06:49:50.286327 ignition[830]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:49:50.287304 ignition[830]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:49:50.289554 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:49:50.291357 ignition[830]: DEBUG : files: compiled without relabeling support, skipping Dec 13 06:49:50.292388 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 06:49:50.292388 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 06:49:50.295679 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 06:49:50.296857 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 06:49:50.298460 unknown[830]: wrote ssh authorized keys file for user: core Dec 13 06:49:50.299467 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 06:49:50.300519 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 06:49:50.301538 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 06:49:50.301538 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 06:49:50.301538 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 06:49:50.301538 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 06:49:50.301538 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 06:49:50.301538 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 06:49:50.301538 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 06:49:50.905201 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 06:49:50.907238 systemd-networkd[708]: eth0: Gained IPv6LL Dec 13 06:49:51.695981 systemd-networkd[708]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d2f2:24:19ff:fef3:4bca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d2f2:24:19ff:fef3:4bca/64 assigned by NDisc. Dec 13 06:49:51.695995 systemd-networkd[708]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 06:49:58.301984 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 06:49:58.304110 ignition[830]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 06:49:58.305053 ignition[830]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 06:49:58.306014 ignition[830]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 06:49:58.307916 ignition[830]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 06:49:58.315218 ignition[830]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 06:49:58.316704 ignition[830]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 06:49:58.316704 ignition[830]: INFO : files: files passed Dec 13 06:49:58.316704 ignition[830]: INFO : Ignition finished successfully Dec 13 06:49:58.320411 systemd[1]: Finished ignition-files.service. Dec 13 06:49:58.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.324909 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 06:49:58.324956 kernel: audit: type=1130 audit(1734072598.323:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.326592 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 06:49:58.331524 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 06:49:58.332706 systemd[1]: Starting ignition-quench.service... Dec 13 06:49:58.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 06:49:58.337427 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 06:49:58.348536 kernel: audit: type=1130 audit(1734072598.338:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.348594 kernel: audit: type=1131 audit(1734072598.338:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.337549 systemd[1]: Finished ignition-quench.service. Dec 13 06:49:58.351736 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 06:49:58.353974 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 06:49:58.360165 kernel: audit: type=1130 audit(1734072598.354:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.354943 systemd[1]: Reached target ignition-complete.target. Dec 13 06:49:58.361917 systemd[1]: Starting initrd-parse-etc.service... Dec 13 06:49:58.380945 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 06:49:58.382001 systemd[1]: Finished initrd-parse-etc.service. Dec 13 06:49:58.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.388114 kernel: audit: type=1130 audit(1734072598.383:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.388241 systemd[1]: Reached target initrd-fs.target. Dec 13 06:49:58.394080 kernel: audit: type=1131 audit(1734072598.387:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.393460 systemd[1]: Reached target initrd.target. Dec 13 06:49:58.394701 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 06:49:58.395906 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 06:49:58.412639 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 06:49:58.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.414522 systemd[1]: Starting initrd-cleanup.service... 
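Separately, the systemd-networkd hint logged during the sysext download above (the DHCPv6 /128 address conflicting with the /64 formed from the same prefix via NDisc) points at two possible fixes: pin the SLAAC interface identifier with a token, or stop forming SLAAC addresses from advertised prefixes. A network unit along these lines would follow that hint; this is a sketch only, the file name, Match section and token value are placeholders, and Token= under [IPv6AcceptRA] is the current spelling of the IPv6Token= setting the hint mentions:

  # /etc/systemd/network/20-eth0.network
  [Match]
  Name=eth0

  [Network]
  DHCP=yes

  [IPv6AcceptRA]
  # pin the SLAAC interface identifier to the one DHCPv6 handed out ...
  Token=static:::24:19ff:fef3:4bca
  # ... or, alternatively, do not form SLAAC addresses from advertised prefixes:
  #UseAutonomousPrefix=no

followed by networkctl reload (or a reboot) to apply it.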
Dec 13 06:49:58.420487 kernel: audit: type=1130 audit(1734072598.413:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.427993 systemd[1]: Stopped target nss-lookup.target. Dec 13 06:49:58.428734 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 06:49:58.430009 systemd[1]: Stopped target timers.target. Dec 13 06:49:58.431195 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 06:49:58.437525 kernel: audit: type=1131 audit(1734072598.432:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.431339 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 06:49:58.432528 systemd[1]: Stopped target initrd.target. Dec 13 06:49:58.438322 systemd[1]: Stopped target basic.target. Dec 13 06:49:58.439497 systemd[1]: Stopped target ignition-complete.target. Dec 13 06:49:58.440697 systemd[1]: Stopped target ignition-diskful.target. Dec 13 06:49:58.441920 systemd[1]: Stopped target initrd-root-device.target. Dec 13 06:49:58.443174 systemd[1]: Stopped target remote-fs.target. Dec 13 06:49:58.444368 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 06:49:58.445619 systemd[1]: Stopped target sysinit.target. Dec 13 06:49:58.446766 systemd[1]: Stopped target local-fs.target. Dec 13 06:49:58.447975 systemd[1]: Stopped target local-fs-pre.target. Dec 13 06:49:58.449184 systemd[1]: Stopped target swap.target. Dec 13 06:49:58.468976 kernel: audit: type=1131 audit(1734072598.451:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.450198 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 06:49:58.450467 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 06:49:58.475742 kernel: audit: type=1131 audit(1734072598.470:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.451580 systemd[1]: Stopped target cryptsetup.target. Dec 13 06:49:58.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.469749 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 06:49:58.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:49:58.470058 systemd[1]: Stopped dracut-initqueue.service. Dec 13 06:49:58.471072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 06:49:58.471298 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 06:49:58.476646 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 06:49:58.476922 systemd[1]: Stopped ignition-files.service. Dec 13 06:49:58.479273 systemd[1]: Stopping ignition-mount.service... Dec 13 06:49:58.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.483957 systemd[1]: Stopping iscsid.service... Dec 13 06:49:58.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.492235 iscsid[714]: iscsid shutting down. Dec 13 06:49:58.484510 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 06:49:58.497713 ignition[868]: INFO : Ignition 2.14.0 Dec 13 06:49:58.497713 ignition[868]: INFO : Stage: umount Dec 13 06:49:58.497713 ignition[868]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 06:49:58.497713 ignition[868]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 06:49:58.497713 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 06:49:58.497713 ignition[868]: INFO : umount: umount passed Dec 13 06:49:58.497713 ignition[868]: INFO : Ignition finished successfully Dec 13 06:49:58.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.485175 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 06:49:58.487042 systemd[1]: Stopping sysroot-boot.service... Dec 13 06:49:58.487808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 06:49:58.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.488041 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 06:49:58.488893 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 06:49:58.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:49:58.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.489059 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 06:49:58.497466 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 06:49:58.497631 systemd[1]: Stopped iscsid.service. Dec 13 06:49:58.499081 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 06:49:58.499306 systemd[1]: Stopped ignition-mount.service. Dec 13 06:49:58.501409 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 06:49:58.501533 systemd[1]: Finished initrd-cleanup.service. Dec 13 06:49:58.512627 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 06:49:58.512709 systemd[1]: Stopped ignition-disks.service. Dec 13 06:49:58.514547 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 06:49:58.514605 systemd[1]: Stopped ignition-kargs.service. Dec 13 06:49:58.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.515259 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 06:49:58.515315 systemd[1]: Stopped ignition-fetch.service. Dec 13 06:49:58.515922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 06:49:58.515980 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 06:49:58.517318 systemd[1]: Stopped target paths.target. Dec 13 06:49:58.518961 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 06:49:58.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.522170 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 06:49:58.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.523272 systemd[1]: Stopped target slices.target. Dec 13 06:49:58.524421 systemd[1]: Stopped target sockets.target. Dec 13 06:49:58.525730 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 06:49:58.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.525824 systemd[1]: Closed iscsid.socket. Dec 13 06:49:58.526847 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 06:49:58.526909 systemd[1]: Stopped ignition-setup.service. Dec 13 06:49:58.528226 systemd[1]: Stopping iscsiuio.service... Dec 13 06:49:58.533318 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 13 06:49:58.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.533926 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 06:49:58.534118 systemd[1]: Stopped iscsiuio.service. Dec 13 06:49:58.535449 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 06:49:58.535556 systemd[1]: Stopped sysroot-boot.service. Dec 13 06:49:58.536689 systemd[1]: Stopped target network.target. Dec 13 06:49:58.537852 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 06:49:58.537902 systemd[1]: Closed iscsiuio.socket. Dec 13 06:49:58.538982 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 06:49:58.539037 systemd[1]: Stopped initrd-setup-root.service. Dec 13 06:49:58.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.541202 systemd[1]: Stopping systemd-networkd.service... Dec 13 06:49:58.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.542005 systemd[1]: Stopping systemd-resolved.service... Dec 13 06:49:58.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.544166 systemd-networkd[708]: eth0: DHCPv6 lease lost Dec 13 06:49:58.559000 audit: BPF prog-id=9 op=UNLOAD Dec 13 06:49:58.545859 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 06:49:58.545996 systemd[1]: Stopped systemd-networkd.service. Dec 13 06:49:58.547383 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 06:49:58.547439 systemd[1]: Closed systemd-networkd.socket. Dec 13 06:49:58.550943 systemd[1]: Stopping network-cleanup.service... Dec 13 06:49:58.554729 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 06:49:58.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.554810 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 06:49:58.556046 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 06:49:58.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.556162 systemd[1]: Stopped systemd-sysctl.service. Dec 13 06:49:58.557634 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 06:49:58.568000 audit: BPF prog-id=6 op=UNLOAD Dec 13 06:49:58.557698 systemd[1]: Stopped systemd-modules-load.service. Dec 13 06:49:58.558833 systemd[1]: Stopping systemd-udevd.service... Dec 13 06:49:58.562441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 06:49:58.563299 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 06:49:58.563448 systemd[1]: Stopped systemd-resolved.service. Dec 13 06:49:58.565068 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Dec 13 06:49:58.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.565330 systemd[1]: Stopped systemd-udevd.service. Dec 13 06:49:58.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.568036 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 06:49:58.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.568129 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 06:49:58.571857 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 06:49:58.571905 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 06:49:58.573076 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 06:49:58.573186 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 06:49:58.574399 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 06:49:58.574458 systemd[1]: Stopped dracut-cmdline.service. Dec 13 06:49:58.575574 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 06:49:58.575629 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 06:49:58.577629 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 06:49:58.587902 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 06:49:58.587986 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 06:49:58.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.589778 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 06:49:58.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.589924 systemd[1]: Stopped network-cleanup.service. Dec 13 06:49:58.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:58.591476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 06:49:58.591588 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 06:49:58.592617 systemd[1]: Reached target initrd-switch-root.target. Dec 13 06:49:58.594587 systemd[1]: Starting initrd-switch-root.service... Dec 13 06:49:58.610456 systemd[1]: Switching root. Dec 13 06:49:58.635575 systemd-journald[201]: Journal stopped Dec 13 06:50:02.506284 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Dec 13 06:50:02.506461 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 06:50:02.506501 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 06:50:02.506536 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 06:50:02.506561 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 06:50:02.506584 kernel: SELinux: policy capability open_perms=1 Dec 13 06:50:02.506612 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 06:50:02.506631 kernel: SELinux: policy capability always_check_network=0 Dec 13 06:50:02.506667 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 06:50:02.506702 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 06:50:02.506727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 06:50:02.506745 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 06:50:02.506786 systemd[1]: Successfully loaded SELinux policy in 79.653ms. Dec 13 06:50:02.506844 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.170ms. Dec 13 06:50:02.506866 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:50:02.506885 systemd[1]: Detected virtualization kvm. Dec 13 06:50:02.506918 systemd[1]: Detected architecture x86-64. Dec 13 06:50:02.506946 systemd[1]: Detected first boot. Dec 13 06:50:02.506972 systemd[1]: Hostname set to . Dec 13 06:50:02.507002 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:50:02.507033 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 06:50:02.507099 systemd[1]: Populated /etc with preset unit settings. Dec 13 06:50:02.507132 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:50:02.507161 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:50:02.507188 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:50:02.507220 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 06:50:02.507280 systemd[1]: Stopped initrd-switch-root.service. Dec 13 06:50:02.510388 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 06:50:02.510418 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 06:50:02.510453 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 06:50:02.510474 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 06:50:02.510500 systemd[1]: Created slice system-getty.slice. Dec 13 06:50:02.510533 systemd[1]: Created slice system-modprobe.slice. Dec 13 06:50:02.510560 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 06:50:02.510585 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 06:50:02.510606 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 06:50:02.510631 systemd[1]: Created slice user.slice. Dec 13 06:50:02.510651 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:50:02.510686 systemd[1]: Started systemd-ask-password-wall.path. 
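The locksmithd.service warnings above are deprecation notices only; systemd 252 still honours CPUShares= and MemoryLimit=, it just asks for the cgroup-v2 era names. The unit lives under the read-only /usr, so there is nothing to edit on this host, but for a user-maintained unit the migration is mechanical, roughly as follows (the values are placeholders, since the log does not show what lines 8 and 9 of the shipped unit actually contain):

  # deprecated (cgroup v1 style)        # current replacement
  CPUShares=1024                  -->   CPUWeight=100
  MemoryLimit=512M                -->   MemoryMax=512M

The docker.socket note in the same breath is similar housekeeping: the unit still references /var/run/docker.sock and systemd transparently rewrote it to /run/docker.sock.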
Dec 13 06:50:02.510714 systemd[1]: Set up automount boot.automount. Dec 13 06:50:02.510749 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 06:50:02.510770 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 06:50:02.510796 systemd[1]: Stopped target initrd-fs.target. Dec 13 06:50:02.510816 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 06:50:02.510841 systemd[1]: Reached target integritysetup.target. Dec 13 06:50:02.510867 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 06:50:02.510905 systemd[1]: Reached target remote-fs.target. Dec 13 06:50:02.510931 systemd[1]: Reached target slices.target. Dec 13 06:50:02.510952 systemd[1]: Reached target swap.target. Dec 13 06:50:02.510970 systemd[1]: Reached target torcx.target. Dec 13 06:50:02.510989 systemd[1]: Reached target veritysetup.target. Dec 13 06:50:02.511013 systemd[1]: Listening on systemd-coredump.socket. Dec 13 06:50:02.511033 systemd[1]: Listening on systemd-initctl.socket. Dec 13 06:50:02.511058 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:50:02.511086 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:50:02.512310 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:50:02.512349 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 06:50:02.512376 systemd[1]: Mounting dev-hugepages.mount... Dec 13 06:50:02.512396 systemd[1]: Mounting dev-mqueue.mount... Dec 13 06:50:02.512415 systemd[1]: Mounting media.mount... Dec 13 06:50:02.512440 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:50:02.512465 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 06:50:02.512490 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 06:50:02.512512 systemd[1]: Mounting tmp.mount... Dec 13 06:50:02.512530 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 06:50:02.512559 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:50:02.512580 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:50:02.512605 systemd[1]: Starting modprobe@configfs.service... Dec 13 06:50:02.512624 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:50:02.512642 systemd[1]: Starting modprobe@drm.service... Dec 13 06:50:02.512668 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:50:02.512706 systemd[1]: Starting modprobe@fuse.service... Dec 13 06:50:02.512733 systemd[1]: Starting modprobe@loop.service... Dec 13 06:50:02.515175 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 06:50:02.515220 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 06:50:02.515243 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 06:50:02.515262 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 06:50:02.515281 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 06:50:02.515306 systemd[1]: Stopped systemd-journald.service. Dec 13 06:50:02.515326 kernel: fuse: init (API version 7.34) Dec 13 06:50:02.515345 kernel: loop: module loaded Dec 13 06:50:02.515370 systemd[1]: Starting systemd-journald.service... Dec 13 06:50:02.515390 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:50:02.515420 systemd[1]: Starting systemd-network-generator.service... Dec 13 06:50:02.515467 systemd[1]: Starting systemd-remount-fs.service... Dec 13 06:50:02.515504 systemd[1]: Starting systemd-udev-trigger.service... 
Dec 13 06:50:02.515533 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 06:50:02.515554 systemd[1]: Stopped verity-setup.service. Dec 13 06:50:02.515579 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:50:02.515599 systemd[1]: Mounted dev-hugepages.mount. Dec 13 06:50:02.515618 systemd[1]: Mounted dev-mqueue.mount. Dec 13 06:50:02.515636 systemd[1]: Mounted media.mount. Dec 13 06:50:02.515667 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 06:50:02.515700 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 06:50:02.515722 systemd[1]: Mounted tmp.mount. Dec 13 06:50:02.515740 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:50:02.515759 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 06:50:02.515778 systemd[1]: Finished modprobe@configfs.service. Dec 13 06:50:02.515803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:50:02.515830 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:50:02.515864 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 06:50:02.515910 systemd-journald[976]: Journal started Dec 13 06:50:02.516031 systemd-journald[976]: Runtime Journal (/run/log/journal/901a8a7feed84f18b1ebaa779d92240e) is 4.7M, max 38.1M, 33.3M free. Dec 13 06:49:58.821000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 06:49:58.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:49:58.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:49:58.918000 audit: BPF prog-id=10 op=LOAD Dec 13 06:49:58.918000 audit: BPF prog-id=10 op=UNLOAD Dec 13 06:49:58.918000 audit: BPF prog-id=11 op=LOAD Dec 13 06:49:58.918000 audit: BPF prog-id=11 op=UNLOAD Dec 13 06:49:59.050000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 06:49:59.050000 audit[901]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00011f8d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:49:59.050000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:49:59.053000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 06:49:59.053000 audit[901]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00011f9a9 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:49:59.053000 audit: CWD cwd="/" Dec 13 06:49:59.053000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:49:59.053000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:49:59.053000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:50:02.268000 audit: BPF prog-id=12 op=LOAD Dec 13 06:50:02.268000 audit: BPF prog-id=3 op=UNLOAD Dec 13 06:50:02.269000 audit: BPF prog-id=13 op=LOAD Dec 13 06:50:02.269000 audit: BPF prog-id=14 op=LOAD Dec 13 06:50:02.269000 audit: BPF prog-id=4 op=UNLOAD Dec 13 06:50:02.269000 audit: BPF prog-id=5 op=UNLOAD Dec 13 06:50:02.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.279000 audit: BPF prog-id=12 op=UNLOAD Dec 13 06:50:02.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.445000 audit: BPF prog-id=15 op=LOAD Dec 13 06:50:02.445000 audit: BPF prog-id=16 op=LOAD Dec 13 06:50:02.445000 audit: BPF prog-id=17 op=LOAD Dec 13 06:50:02.445000 audit: BPF prog-id=13 op=UNLOAD Dec 13 06:50:02.445000 audit: BPF prog-id=14 op=UNLOAD Dec 13 06:50:02.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:50:02.503000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 06:50:02.503000 audit[976]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff1fa2e2d0 a2=4000 a3=7fff1fa2e36c items=0 ppid=1 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:50:02.503000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 06:50:02.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:49:59.046304 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:50:02.264240 systemd[1]: Queued start job for default target multi-user.target. Dec 13 06:49:59.047010 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:50:02.264276 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 06:49:59.047046 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 06:50:02.270676 systemd[1]: systemd-journald.service: Deactivated successfully. 
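The audit records interleaved with the journal output follow a fixed shape: a record type (SERVICE_START, SERVICE_STOP, AVC, BPF, SYSCALL, ...) followed by key=value fields, with the unit name of service events embedded in msg='unit=... res=...'. A small sketch of reconstructing the unit start/stop sequence from a saved copy of this console log; the file name below is hypothetical:

    import re

    # Matches records like:
    #   audit[1]: SERVICE_START ... msg='unit=systemd-journald ... res=success'
    PATTERN = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+) .*?res=(\w+)")

    events = []
    with open("boot-console.log", encoding="utf-8", errors="replace") as fh:  # hypothetical file name
        for line in fh:
            for kind, unit, res in PATTERN.findall(line):
                events.append((kind, unit, res))

    # Print the start/stop sequence in the order it appears in the log.
    for kind, unit, res in events:
        print(f"{kind:13s} {unit:35s} {res}")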
Dec 13 06:49:59.047201 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 06:49:59.047219 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 06:49:59.047276 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 06:49:59.047306 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 06:49:59.047704 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 06:49:59.047804 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:49:59.047831 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 06:49:59.049696 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 06:49:59.049753 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 06:49:59.049816 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 06:49:59.049842 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 06:49:59.049872 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 06:49:59.049897 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:49:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 06:50:01.711371 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:50:01Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:50:01.711902 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:50:01Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:50:01.712140 
/usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:50:01Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:50:01.712512 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:50:01Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:50:01.712614 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:50:01Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 06:50:01.712752 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T06:50:01Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 06:50:02.534312 systemd[1]: Finished modprobe@drm.service. Dec 13 06:50:02.538660 systemd[1]: Started systemd-journald.service. Dec 13 06:50:02.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.541155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:50:02.541346 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:50:02.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.542367 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 06:50:02.542537 systemd[1]: Finished modprobe@fuse.service. Dec 13 06:50:02.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.543490 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:50:02.543691 systemd[1]: Finished modprobe@loop.service. 
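The torcx-generator trace above ends by "sealing" the applied profile into /run/metadata/torcx as shell-style KEY="VALUE" assignments (TORCX_LOWER_PROFILES, TORCX_UPPER_PROFILE, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR). A minimal sketch of reading that state back, assuming the format shown in the log; the file only exists on hosts where torcx actually ran:

    import pathlib

    METADATA = pathlib.Path("/run/metadata/torcx")

    def load_torcx_state(path: pathlib.Path) -> dict[str, str]:
        """Parse KEY="VALUE" lines as written by torcx-generator."""
        state = {}
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            state[key] = value.strip('"')
        return state

    if METADATA.exists():
        for key, value in load_torcx_state(METADATA).items():
            print(f"{key} = {value!r}")
    else:
        print("no torcx metadata on this host")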
Dec 13 06:50:02.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.544762 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:50:02.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.545727 systemd[1]: Finished systemd-network-generator.service. Dec 13 06:50:02.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.546681 systemd[1]: Finished systemd-remount-fs.service. Dec 13 06:50:02.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.548120 systemd[1]: Reached target network-pre.target. Dec 13 06:50:02.551181 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 06:50:02.553214 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 06:50:02.558271 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 06:50:02.562266 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 06:50:02.564887 systemd[1]: Starting systemd-journal-flush.service... Dec 13 06:50:02.566273 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:50:02.569525 systemd[1]: Starting systemd-random-seed.service... Dec 13 06:50:02.570416 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:50:02.572464 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:50:02.578295 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 06:50:02.579301 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 06:50:02.585922 systemd-journald[976]: Time spent on flushing to /var/log/journal/901a8a7feed84f18b1ebaa779d92240e is 35.216ms for 1270 entries. Dec 13 06:50:02.585922 systemd-journald[976]: System Journal (/var/log/journal/901a8a7feed84f18b1ebaa779d92240e) is 8.0M, max 584.8M, 576.8M free. Dec 13 06:50:02.639395 systemd-journald[976]: Received client request to flush runtime journal. Dec 13 06:50:02.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:50:02.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.595153 systemd[1]: Finished systemd-random-seed.service. Dec 13 06:50:02.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.596045 systemd[1]: Reached target first-boot-complete.target. Dec 13 06:50:02.608967 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:50:02.613500 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 06:50:02.619043 systemd[1]: Starting systemd-sysusers.service... Dec 13 06:50:02.640585 systemd[1]: Finished systemd-journal-flush.service. Dec 13 06:50:02.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.661932 systemd[1]: Finished systemd-sysusers.service. Dec 13 06:50:02.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:02.717613 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 06:50:02.720719 systemd[1]: Starting systemd-udev-settle.service... Dec 13 06:50:02.732340 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 06:50:03.245977 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 06:50:03.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.248000 audit: BPF prog-id=18 op=LOAD Dec 13 06:50:03.248000 audit: BPF prog-id=19 op=LOAD Dec 13 06:50:03.248000 audit: BPF prog-id=7 op=UNLOAD Dec 13 06:50:03.248000 audit: BPF prog-id=8 op=UNLOAD Dec 13 06:50:03.250166 systemd[1]: Starting systemd-udevd.service... Dec 13 06:50:03.277561 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Dec 13 06:50:03.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.312000 audit: BPF prog-id=20 op=LOAD Dec 13 06:50:03.307992 systemd[1]: Started systemd-udevd.service. Dec 13 06:50:03.314216 systemd[1]: Starting systemd-networkd.service... Dec 13 06:50:03.331137 kernel: kauditd_printk_skb: 101 callbacks suppressed Dec 13 06:50:03.331262 kernel: audit: type=1334 audit(1734072603.325:141): prog-id=21 op=LOAD Dec 13 06:50:03.331298 kernel: audit: type=1334 audit(1734072603.329:142): prog-id=22 op=LOAD Dec 13 06:50:03.325000 audit: BPF prog-id=21 op=LOAD Dec 13 06:50:03.329000 audit: BPF prog-id=22 op=LOAD Dec 13 06:50:03.330000 audit: BPF prog-id=23 op=LOAD Dec 13 06:50:03.334053 systemd[1]: Starting systemd-userdbd.service... 
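systemd-journald reports above how long flushing the runtime journal in /run into the persistent journal under /var/log/journal took (35.216 ms for 1270 entries, i.e. roughly 28 microseconds per entry), along with the current size and cap of each journal. A small sketch that redoes that per-entry figure and, assuming journalctl is available, asks for the current on-disk usage:

    import subprocess

    # Figures reported by systemd-journald in the log above.
    flush_ms = 35.216
    entries = 1270
    print(f"~{flush_ms / entries * 1000:.1f} microseconds per flushed entry")

    # 'journalctl --disk-usage' prints the combined size of archived and active journals.
    usage = subprocess.run(["journalctl", "--disk-usage"], capture_output=True, text=True)
    print(usage.stdout.strip() or usage.stderr.strip())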
Dec 13 06:50:03.335832 kernel: audit: type=1334 audit(1734072603.330:143): prog-id=23 op=LOAD Dec 13 06:50:03.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.390977 systemd[1]: Started systemd-userdbd.service. Dec 13 06:50:03.397177 kernel: audit: type=1130 audit(1734072603.391:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.417357 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 06:50:03.478591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 06:50:03.508994 systemd-networkd[1024]: lo: Link UP Dec 13 06:50:03.509010 systemd-networkd[1024]: lo: Gained carrier Dec 13 06:50:03.509908 systemd-networkd[1024]: Enumeration completed Dec 13 06:50:03.510057 systemd[1]: Started systemd-networkd.service. Dec 13 06:50:03.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.524628 kernel: audit: type=1130 audit(1734072603.510:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.520454 systemd-networkd[1024]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 06:50:03.600313 systemd-networkd[1024]: eth0: Link UP Dec 13 06:50:03.600330 systemd-networkd[1024]: eth0: Gained carrier Dec 13 06:50:03.613336 systemd-networkd[1024]: eth0: DHCPv4 address 10.243.75.202/30, gateway 10.243.75.201 acquired from 10.243.75.201 Dec 13 06:50:03.626142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 06:50:03.633119 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 06:50:03.640122 kernel: ACPI: button: Power Button [PWRF] Dec 13 06:50:03.636000 audit[1015]: AVC avc: denied { confidentiality } for pid=1015 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 06:50:03.660144 kernel: audit: type=1400 audit(1734072603.636:146): avc: denied { confidentiality } for pid=1015 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 06:50:03.636000 audit[1015]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dd4b4c7b30 a1=337fc a2=7feae791bbc5 a3=5 items=110 ppid=1014 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:50:03.689236 kernel: audit: type=1300 audit(1734072603.636:146): arch=c000003e syscall=175 success=yes exit=0 a0=55dd4b4c7b30 a1=337fc a2=7feae791bbc5 a3=5 items=110 ppid=1014 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:50:03.636000 audit: CWD cwd="/" Dec 13 
06:50:03.692167 kernel: audit: type=1307 audit(1734072603.636:146): cwd="/" Dec 13 06:50:03.636000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.698112 kernel: audit: type=1302 audit(1734072603.636:146): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=1 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.707242 kernel: audit: type=1302 audit(1734072603.636:146): item=1 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=2 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=3 name=(null) inode=15833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=4 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=5 name=(null) inode=15834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=6 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=7 name=(null) inode=15835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=8 name=(null) inode=15835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=9 name=(null) inode=15836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=10 name=(null) inode=15835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=11 name=(null) inode=15837 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=12 name=(null) inode=15835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=13 name=(null) inode=15838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=14 name=(null) inode=15835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=15 name=(null) inode=15839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=16 name=(null) inode=15835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=17 name=(null) inode=15840 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=18 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=19 name=(null) inode=15841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=20 name=(null) inode=15841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=21 name=(null) inode=15842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=22 name=(null) inode=15841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.714155 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 06:50:03.636000 audit: PATH item=23 name=(null) inode=15843 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=24 name=(null) inode=15841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=25 name=(null) inode=15844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=26 name=(null) inode=15841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=27 name=(null) inode=15845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=28 name=(null) inode=15841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=29 name=(null) inode=15846 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=30 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=31 name=(null) inode=15847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=32 name=(null) inode=15847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=33 name=(null) inode=15848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=34 name=(null) inode=15847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=35 name=(null) inode=15849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=36 name=(null) inode=15847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=37 name=(null) inode=15850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=38 name=(null) inode=15847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=39 name=(null) inode=15851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=40 name=(null) inode=15847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=41 name=(null) inode=15852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=42 name=(null) inode=15832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=43 name=(null) inode=15853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=44 name=(null) inode=15853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=45 name=(null) inode=15854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=46 name=(null) inode=15853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=47 name=(null) inode=15855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=48 name=(null) inode=15853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=49 name=(null) inode=15856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=50 name=(null) inode=15853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=51 name=(null) inode=15857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=52 name=(null) inode=15853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=53 name=(null) inode=15858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=55 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=56 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=57 name=(null) inode=15860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=58 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=59 name=(null) inode=15861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=60 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=61 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH 
item=62 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=63 name=(null) inode=15863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=64 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=65 name=(null) inode=15864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=66 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=67 name=(null) inode=15865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=68 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=69 name=(null) inode=15866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=70 name=(null) inode=15862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=71 name=(null) inode=15867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=72 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=73 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=74 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=75 name=(null) inode=15869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=76 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=77 name=(null) inode=15870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=78 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=79 name=(null) inode=15871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=80 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=81 name=(null) inode=15872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=82 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=83 name=(null) inode=15873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=84 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=85 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=86 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=87 name=(null) inode=15875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=88 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=89 name=(null) inode=15876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=90 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=91 name=(null) inode=15877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=92 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=93 name=(null) inode=15878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=94 name=(null) inode=15874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=95 name=(null) inode=15879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=96 name=(null) inode=15859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=97 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=98 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=99 name=(null) inode=15881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=100 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=101 name=(null) inode=15882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=102 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=103 name=(null) inode=15883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=104 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=105 name=(null) inode=15884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=106 name=(null) inode=15880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=107 name=(null) inode=15885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PATH item=109 name=(null) inode=15886 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:50:03.636000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 06:50:03.731130 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 06:50:03.740297 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 
06:50:03.740533 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 06:50:03.866529 systemd[1]: Finished systemd-udev-settle.service. Dec 13 06:50:03.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.870384 systemd[1]: Starting lvm2-activation-early.service... Dec 13 06:50:03.896570 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:50:03.932083 systemd[1]: Finished lvm2-activation-early.service. Dec 13 06:50:03.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.933047 systemd[1]: Reached target cryptsetup.target. Dec 13 06:50:03.935634 systemd[1]: Starting lvm2-activation.service... Dec 13 06:50:03.942229 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 06:50:03.965799 systemd[1]: Finished lvm2-activation.service. Dec 13 06:50:03.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:03.966680 systemd[1]: Reached target local-fs-pre.target. Dec 13 06:50:03.967400 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 06:50:03.967452 systemd[1]: Reached target local-fs.target. Dec 13 06:50:03.968052 systemd[1]: Reached target machines.target. Dec 13 06:50:03.970862 systemd[1]: Starting ldconfig.service... Dec 13 06:50:03.972227 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:50:03.972335 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:50:03.974013 systemd[1]: Starting systemd-boot-update.service... Dec 13 06:50:03.980231 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 06:50:03.983458 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 06:50:03.987300 systemd[1]: Starting systemd-sysext.service... Dec 13 06:50:03.997699 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Dec 13 06:50:03.999909 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 06:50:04.007829 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 06:50:04.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.034324 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 06:50:04.034598 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 06:50:04.036879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 06:50:04.038076 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 06:50:04.055905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
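Unit names like systemd-fsck@dev-disk-by\x2dlabel-OEM.service look odd because systemd escapes device paths before embedding them in unit names: "/" becomes "-" and a literal "-" becomes "\x2d", so /dev/disk/by-label/OEM turns into dev-disk-by\x2dlabel-OEM. A sketch of reproducing that encoding with the systemd-escape tool; the device path and instance name are taken from the log:

    import subprocess

    def sd_escape_path(path: str) -> str:
        """Use 'systemd-escape --path' to produce the unit-name form of a filesystem path."""
        out = subprocess.run(
            ["systemd-escape", "--path", path], capture_output=True, text=True, check=True
        )
        return out.stdout.strip()

    escaped = sd_escape_path("/dev/disk/by-label/OEM")
    print(escaped)                              # expected: dev-disk-by\x2dlabel-OEM
    print(f"systemd-fsck@{escaped}.service")    # matches the unit name in the log above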
Dec 13 06:50:04.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.067327 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 06:50:04.105150 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 06:50:04.129189 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 06:50:04.144284 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Dec 13 06:50:04.144284 systemd-fsck[1056]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 06:50:04.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.146370 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 06:50:04.149239 systemd[1]: Mounting boot.mount... Dec 13 06:50:04.162398 systemd[1]: Mounted boot.mount. Dec 13 06:50:04.164356 (sd-sysext)[1059]: Using extensions 'kubernetes'. Dec 13 06:50:04.165239 (sd-sysext)[1059]: Merged extensions into '/usr'. Dec 13 06:50:04.195668 systemd[1]: Finished systemd-boot-update.service. Dec 13 06:50:04.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.198187 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:50:04.201830 systemd[1]: Mounting usr-share-oem.mount... Dec 13 06:50:04.206668 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.208848 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:50:04.211936 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:50:04.214612 systemd[1]: Starting modprobe@loop.service... Dec 13 06:50:04.216333 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.216549 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:50:04.216788 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:50:04.222530 systemd[1]: Mounted usr-share-oem.mount. Dec 13 06:50:04.224758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:50:04.224974 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:50:04.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.226224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:50:04.226408 systemd[1]: Finished modprobe@efi_pstore.service. 
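The (sd-sysext) messages record systemd-sysext finding a 'kubernetes' extension image and overlaying it onto /usr, which is how Flatcar layers optional content onto its read-only /usr. A sketch of inspecting that state on a running host, assuming a systemd new enough to ship the systemd-sysext tool (the udev naming scheme 'v252' earlier in the log suggests systemd 252 here):

    import pathlib
    import subprocess

    # 'systemd-sysext status' lists each hierarchy (e.g. /usr) together with the
    # extension images currently merged into it, matching the "Merged extensions
    # into '/usr'" message above.
    status = subprocess.run(["systemd-sysext", "status"], capture_output=True, text=True)
    print(status.stdout or status.stderr)

    # The usual sysext search directories; the log does not show which one holds
    # the 'kubernetes' image on this host.
    for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        path = pathlib.Path(directory)
        if path.is_dir():
            print(directory, "->", sorted(child.name for child in path.iterdir()))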
Dec 13 06:50:04.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.228468 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:50:04.228730 systemd[1]: Finished modprobe@loop.service. Dec 13 06:50:04.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.230394 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:50:04.230552 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.231955 systemd[1]: Finished systemd-sysext.service. Dec 13 06:50:04.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.234727 systemd[1]: Starting ensure-sysext.service... Dec 13 06:50:04.237055 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 06:50:04.246203 systemd[1]: Reloading. Dec 13 06:50:04.302333 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 06:50:04.309402 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 06:50:04.320228 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 06:50:04.382332 /usr/lib/systemd/system-generators/torcx-generator[1086]: time="2024-12-13T06:50:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:50:04.383183 /usr/lib/systemd/system-generators/torcx-generator[1086]: time="2024-12-13T06:50:04Z" level=info msg="torcx already run" Dec 13 06:50:04.455042 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 06:50:04.534101 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:50:04.534137 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:50:04.562551 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
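During the reload, systemd flags /run/systemd/system/docker.socket for pointing ListenStream= at /var/run/docker.sock, a legacy location it transparently rewrites to /run/docker.sock, and it also warns about deprecated CPUShares=/MemoryLimit= settings in locksmithd.service. A read-only sketch of scanning generated socket units for remaining /var/run references, assuming the unit directory named in the warning:

    import pathlib
    import re

    UNIT_DIR = pathlib.Path("/run/systemd/system")   # directory named in the warning above
    LEGACY = re.compile(r"^\s*Listen\w+\s*=\s*/var/run/.*$", re.MULTILINE)

    for unit in sorted(UNIT_DIR.glob("*.socket")):
        text = unit.read_text(errors="replace")
        for match in LEGACY.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            print(f"{unit}:{line_no}: {match.group(0).strip()}")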
Dec 13 06:50:04.640000 audit: BPF prog-id=24 op=LOAD Dec 13 06:50:04.640000 audit: BPF prog-id=20 op=UNLOAD Dec 13 06:50:04.640000 audit: BPF prog-id=25 op=LOAD Dec 13 06:50:04.641000 audit: BPF prog-id=21 op=UNLOAD Dec 13 06:50:04.641000 audit: BPF prog-id=26 op=LOAD Dec 13 06:50:04.641000 audit: BPF prog-id=27 op=LOAD Dec 13 06:50:04.641000 audit: BPF prog-id=22 op=UNLOAD Dec 13 06:50:04.641000 audit: BPF prog-id=23 op=UNLOAD Dec 13 06:50:04.644000 audit: BPF prog-id=28 op=LOAD Dec 13 06:50:04.644000 audit: BPF prog-id=29 op=LOAD Dec 13 06:50:04.644000 audit: BPF prog-id=18 op=UNLOAD Dec 13 06:50:04.644000 audit: BPF prog-id=19 op=UNLOAD Dec 13 06:50:04.646000 audit: BPF prog-id=30 op=LOAD Dec 13 06:50:04.646000 audit: BPF prog-id=15 op=UNLOAD Dec 13 06:50:04.646000 audit: BPF prog-id=31 op=LOAD Dec 13 06:50:04.646000 audit: BPF prog-id=32 op=LOAD Dec 13 06:50:04.646000 audit: BPF prog-id=16 op=UNLOAD Dec 13 06:50:04.646000 audit: BPF prog-id=17 op=UNLOAD Dec 13 06:50:04.651588 systemd[1]: Finished ldconfig.service. Dec 13 06:50:04.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.653819 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 06:50:04.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.659779 systemd[1]: Starting audit-rules.service... Dec 13 06:50:04.662204 systemd[1]: Starting clean-ca-certificates.service... Dec 13 06:50:04.665883 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 06:50:04.670000 audit: BPF prog-id=33 op=LOAD Dec 13 06:50:04.672342 systemd[1]: Starting systemd-resolved.service... Dec 13 06:50:04.675000 audit: BPF prog-id=34 op=LOAD Dec 13 06:50:04.677076 systemd[1]: Starting systemd-timesyncd.service... Dec 13 06:50:04.679443 systemd[1]: Starting systemd-update-utmp.service... Dec 13 06:50:04.692534 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.697021 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:50:04.697000 audit[1143]: SYSTEM_BOOT pid=1143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.700228 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:50:04.702715 systemd[1]: Starting modprobe@loop.service... Dec 13 06:50:04.704071 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.704302 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:50:04.706733 systemd[1]: Finished clean-ca-certificates.service. Dec 13 06:50:04.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.708745 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:50:04.708937 systemd[1]: Finished modprobe@dm_mod.service. 
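The audit: BPF prog-id=... records above show BPF programs being unloaded and replaced immediately after the systemd reload. As an illustration of working with records of this shape (the sample lines are copied from the log; in practice they would come from journalctl output), a short Python sketch that tallies which prog-ids remain loaded:

    import re

    records = [
        "audit: BPF prog-id=24 op=LOAD",
        "audit: BPF prog-id=20 op=UNLOAD",
        "audit: BPF prog-id=25 op=LOAD",
    ]

    loaded = set()
    for line in records:
        m = re.search(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line)
        if m:
            prog_id, op = int(m.group(1)), m.group(2)
            (loaded.add if op == "LOAD" else loaded.discard)(prog_id)

    print(sorted(loaded))   # prog-ids still loaded from the sample lines: [24, 25]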
Dec 13 06:50:04.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.711328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:50:04.711521 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:50:04.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.715987 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:50:04.716191 systemd[1]: Finished modprobe@loop.service. Dec 13 06:50:04.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.721285 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.723424 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:50:04.726063 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:50:04.729939 systemd[1]: Starting modprobe@loop.service... Dec 13 06:50:04.730761 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.731060 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:50:04.731388 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:50:04.733690 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:50:04.733878 systemd[1]: Finished modprobe@loop.service. Dec 13 06:50:04.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.741452 systemd[1]: Finished systemd-update-utmp.service. Dec 13 06:50:04.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:50:04.744780 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.748211 systemd[1]: Starting modprobe@drm.service... Dec 13 06:50:04.750541 systemd[1]: Starting modprobe@loop.service... Dec 13 06:50:04.751510 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.751647 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:50:04.753285 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 06:50:04.756293 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 06:50:04.757309 systemd[1]: Finished ensure-sysext.service. Dec 13 06:50:04.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.766011 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 06:50:04.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.768939 systemd[1]: Starting systemd-update-done.service... Dec 13 06:50:04.771736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:50:04.771929 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:50:04.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.773253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:50:04.773439 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 06:50:04.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.774253 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:50:04.778967 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 06:50:04.779176 systemd[1]: Finished modprobe@drm.service. Dec 13 06:50:04.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:50:04.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.780349 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:50:04.780527 systemd[1]: Finished modprobe@loop.service. Dec 13 06:50:04.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.781346 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.786655 systemd[1]: Finished systemd-update-done.service. Dec 13 06:50:04.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:50:04.813000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 06:50:04.813000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc057d25a0 a2=420 a3=0 items=0 ppid=1135 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:50:04.813000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 06:50:04.814287 augenrules[1165]: No rules Dec 13 06:50:04.814552 systemd[1]: Finished audit-rules.service. Dec 13 06:50:04.831900 systemd-resolved[1139]: Positive Trust Anchors: Dec 13 06:50:04.831924 systemd-resolved[1139]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:50:04.831961 systemd-resolved[1139]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:50:04.839614 systemd-resolved[1139]: Using system hostname 'srv-zd2v7.gb1.brightbox.com'. Dec 13 06:50:04.840449 systemd[1]: Started systemd-timesyncd.service. Dec 13 06:50:04.841328 systemd[1]: Reached target time-set.target. Dec 13 06:50:04.843974 systemd[1]: Started systemd-resolved.service. Dec 13 06:50:04.844707 systemd[1]: Reached target network.target. Dec 13 06:50:04.845319 systemd[1]: Reached target nss-lookup.target. Dec 13 06:50:04.845929 systemd[1]: Reached target sysinit.target. Dec 13 06:50:04.846644 systemd[1]: Started motdgen.path. Dec 13 06:50:04.847237 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 06:50:04.848204 systemd[1]: Started logrotate.timer. Dec 13 06:50:04.848881 systemd[1]: Started mdadm.timer. 
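The SYSCALL/PROCTITLE pair above is auditd recording the rule load performed by auditctl; the proctitle field is the command line, hex-encoded with NUL-separated arguments. Decoding it (a one-off sketch, shown only to make the record readable) yields /sbin/auditctl -R /etc/audit/audit.rules, which matches the augenrules/audit-rules.service activity around it:

    # Hex-encoded proctitle copied from the audit record above; arguments are NUL-separated.
    hexdata = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(hexdata).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))   # /sbin/auditctl -R /etc/audit/audit.rules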
Dec 13 06:50:04.849529 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 06:50:04.850240 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 06:50:04.850290 systemd[1]: Reached target paths.target. Dec 13 06:50:04.850867 systemd[1]: Reached target timers.target. Dec 13 06:50:04.851933 systemd[1]: Listening on dbus.socket. Dec 13 06:50:04.854518 systemd[1]: Starting docker.socket... Dec 13 06:50:04.858813 systemd[1]: Listening on sshd.socket. Dec 13 06:50:04.859566 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:50:04.860186 systemd[1]: Listening on docker.socket. Dec 13 06:50:04.860895 systemd[1]: Reached target sockets.target. Dec 13 06:50:04.861496 systemd[1]: Reached target basic.target. Dec 13 06:50:04.862169 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.862220 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 06:50:04.863808 systemd[1]: Starting containerd.service... Dec 13 06:50:04.865986 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 06:50:04.868701 systemd[1]: Starting dbus.service... Dec 13 06:50:04.871775 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 06:50:04.877016 systemd[1]: Starting extend-filesystems.service... Dec 13 06:50:04.877954 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 06:50:04.888413 jq[1178]: false Dec 13 06:50:04.883677 systemd[1]: Starting motdgen.service... Dec 13 06:50:04.891233 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 06:50:04.899468 systemd[1]: Starting sshd-keygen.service... Dec 13 06:50:04.909425 systemd[1]: Starting systemd-logind.service... Dec 13 06:50:04.911250 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 06:50:04.911438 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 06:50:04.912245 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 06:50:04.914601 systemd[1]: Starting update-engine.service... Dec 13 06:50:04.918271 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 06:50:04.925234 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 06:50:04.940277 jq[1189]: true Dec 13 06:50:04.925687 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 06:50:04.926341 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 06:50:04.927741 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 06:50:04.953018 jq[1194]: true Dec 13 06:50:04.970811 dbus-daemon[1175]: [system] SELinux support is enabled Dec 13 06:50:04.971148 systemd[1]: Started dbus.service. 
Dec 13 06:50:04.978450 dbus-daemon[1175]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1024 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 06:50:04.975751 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 06:50:04.980357 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 06:50:04.975791 systemd[1]: Reached target system-config.target. Dec 13 06:50:04.976524 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 06:50:04.976549 systemd[1]: Reached target user-config.target. Dec 13 06:50:04.985192 systemd[1]: Starting systemd-hostnamed.service... Dec 13 06:50:04.991941 extend-filesystems[1179]: Found loop1 Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda1 Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda2 Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda3 Dec 13 06:50:04.991941 extend-filesystems[1179]: Found usr Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda4 Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda6 Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda7 Dec 13 06:50:04.991941 extend-filesystems[1179]: Found vda9 Dec 13 06:50:04.991941 extend-filesystems[1179]: Checking size of /dev/vda9 Dec 13 06:50:05.034767 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:50:05.034816 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:50:05.036367 extend-filesystems[1179]: Resized partition /dev/vda9 Dec 13 06:50:05.048582 systemd[1]: Created slice system-sshd.slice. Dec 13 06:50:05.054003 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 06:50:05.054317 systemd[1]: Finished motdgen.service. Dec 13 06:50:05.071594 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 06:50:05.082126 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 06:50:05.083487 update_engine[1188]: I1213 06:50:05.082424 1188 main.cc:92] Flatcar Update Engine starting Dec 13 06:50:05.086816 bash[1227]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:50:05.088779 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 06:50:05.090777 systemd[1]: Started update-engine.service. Dec 13 06:50:05.091183 update_engine[1188]: I1213 06:50:05.090879 1188 update_check_scheduler.cc:74] Next update check in 6m48s Dec 13 06:50:05.094257 systemd[1]: Started locksmithd.service. Dec 13 06:50:05.900360 systemd-resolved[1139]: Clock change detected. Flushing caches. Dec 13 06:50:05.900745 systemd-timesyncd[1140]: Contacted time server 149.22.220.130:123 (0.flatcar.pool.ntp.org). Dec 13 06:50:05.900924 systemd-timesyncd[1140]: Initial clock synchronization to Fri 2024-12-13 06:50:05.900197 UTC. Dec 13 06:50:05.949453 systemd-logind[1187]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 06:50:05.951037 systemd-logind[1187]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 06:50:05.952330 systemd-logind[1187]: New seat seat0. 
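The kernel line above shows the filesystem on /dev/vda9 (mounted on /, per the extend-filesystems output that follows) being grown online from 1617920 to 15121403 blocks, which resize2fs confirms are 4k blocks. Converting both counts to sizes is plain arithmetic on the logged numbers:

    BLOCK_SIZE = 4096                                  # "(4k) blocks" per the resize2fs output
    for label, blocks in (("before", 1617920), ("after", 15121403)):
        print(f"{label}: {blocks} blocks = {blocks * BLOCK_SIZE / 2**30:.1f} GiB")
    # before: 1617920 blocks = 6.2 GiB
    # after: 15121403 blocks = 57.7 GiB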
Dec 13 06:50:05.959045 systemd[1]: Started systemd-logind.service. Dec 13 06:50:05.978037 env[1191]: time="2024-12-13T06:50:05.977905659Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 06:50:06.001117 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 06:50:06.007578 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 06:50:06.007837 systemd[1]: Started systemd-hostnamed.service. Dec 13 06:50:06.008506 dbus-daemon[1175]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1210 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 06:50:06.013456 systemd[1]: Starting polkit.service... Dec 13 06:50:06.019717 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 06:50:06.019717 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 06:50:06.019717 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 06:50:06.023452 extend-filesystems[1179]: Resized filesystem in /dev/vda9 Dec 13 06:50:06.020825 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 06:50:06.021226 systemd[1]: Finished extend-filesystems.service. Dec 13 06:50:06.039959 polkitd[1232]: Started polkitd version 121 Dec 13 06:50:06.058328 polkitd[1232]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 06:50:06.058453 polkitd[1232]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 06:50:06.059867 polkitd[1232]: Finished loading, compiling and executing 2 rules Dec 13 06:50:06.060391 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 06:50:06.060569 systemd[1]: Started polkit.service. Dec 13 06:50:06.061795 polkitd[1232]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 06:50:06.072643 systemd-hostnamed[1210]: Hostname set to (static) Dec 13 06:50:06.073569 env[1191]: time="2024-12-13T06:50:06.073521590Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 06:50:06.073872 env[1191]: time="2024-12-13T06:50:06.073842856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:50:06.076183 env[1191]: time="2024-12-13T06:50:06.076135978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:50:06.076183 env[1191]: time="2024-12-13T06:50:06.076180741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:50:06.076473 env[1191]: time="2024-12-13T06:50:06.076440178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:50:06.076559 env[1191]: time="2024-12-13T06:50:06.076474487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 06:50:06.076559 env[1191]: time="2024-12-13T06:50:06.076497704Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 06:50:06.076559 env[1191]: time="2024-12-13T06:50:06.076514167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 06:50:06.076702 env[1191]: time="2024-12-13T06:50:06.076674387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:50:06.077207 env[1191]: time="2024-12-13T06:50:06.077178412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 06:50:06.077396 env[1191]: time="2024-12-13T06:50:06.077362989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 06:50:06.077453 env[1191]: time="2024-12-13T06:50:06.077396409Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 06:50:06.077498 env[1191]: time="2024-12-13T06:50:06.077473128Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 06:50:06.077498 env[1191]: time="2024-12-13T06:50:06.077492871Z" level=info msg="metadata content store policy set" policy=shared Dec 13 06:50:06.082841 env[1191]: time="2024-12-13T06:50:06.082805712Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 06:50:06.082907 env[1191]: time="2024-12-13T06:50:06.082848331Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 06:50:06.082907 env[1191]: time="2024-12-13T06:50:06.082871625Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 06:50:06.082985 env[1191]: time="2024-12-13T06:50:06.082942020Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.082985 env[1191]: time="2024-12-13T06:50:06.082967585Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.083092 env[1191]: time="2024-12-13T06:50:06.083039297Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.083144 env[1191]: time="2024-12-13T06:50:06.083090971Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.083144 env[1191]: time="2024-12-13T06:50:06.083117690Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.083144 env[1191]: time="2024-12-13T06:50:06.083139498Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.083263 env[1191]: time="2024-12-13T06:50:06.083160138Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.083263 env[1191]: time="2024-12-13T06:50:06.083180100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 06:50:06.083263 env[1191]: time="2024-12-13T06:50:06.083206536Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 06:50:06.083373 env[1191]: time="2024-12-13T06:50:06.083357665Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 06:50:06.083535 env[1191]: time="2024-12-13T06:50:06.083508153Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 06:50:06.084031 env[1191]: time="2024-12-13T06:50:06.084002475Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 06:50:06.084126 env[1191]: time="2024-12-13T06:50:06.084057938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084126 env[1191]: time="2024-12-13T06:50:06.084109905Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 06:50:06.084222 env[1191]: time="2024-12-13T06:50:06.084205065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084268 env[1191]: time="2024-12-13T06:50:06.084229069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084367 env[1191]: time="2024-12-13T06:50:06.084340992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084420 env[1191]: time="2024-12-13T06:50:06.084375847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084420 env[1191]: time="2024-12-13T06:50:06.084397169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084420 env[1191]: time="2024-12-13T06:50:06.084416166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084574 env[1191]: time="2024-12-13T06:50:06.084446653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084574 env[1191]: time="2024-12-13T06:50:06.084465886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084574 env[1191]: time="2024-12-13T06:50:06.084488322Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 06:50:06.084813 env[1191]: time="2024-12-13T06:50:06.084730587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084813 env[1191]: time="2024-12-13T06:50:06.084756664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084813 env[1191]: time="2024-12-13T06:50:06.084775922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.084813 env[1191]: time="2024-12-13T06:50:06.084795805Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 06:50:06.084976 env[1191]: time="2024-12-13T06:50:06.084816805Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 06:50:06.084976 env[1191]: time="2024-12-13T06:50:06.084834217Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 06:50:06.084976 env[1191]: time="2024-12-13T06:50:06.084877318Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 06:50:06.084976 env[1191]: time="2024-12-13T06:50:06.084939371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 06:50:06.085301 env[1191]: time="2024-12-13T06:50:06.085226935Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 06:50:06.087495 env[1191]: time="2024-12-13T06:50:06.085316888Z" level=info msg="Connect containerd service" Dec 13 06:50:06.087495 env[1191]: time="2024-12-13T06:50:06.085387723Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 06:50:06.087656 env[1191]: time="2024-12-13T06:50:06.087622486Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 06:50:06.088184 env[1191]: time="2024-12-13T06:50:06.088156579Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 06:50:06.088265 env[1191]: time="2024-12-13T06:50:06.088231569Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 06:50:06.088416 systemd[1]: Started containerd.service. Dec 13 06:50:06.089614 env[1191]: time="2024-12-13T06:50:06.089415405Z" level=info msg="containerd successfully booted in 0.116323s" Dec 13 06:50:06.090232 env[1191]: time="2024-12-13T06:50:06.090186141Z" level=info msg="Start subscribing containerd event" Dec 13 06:50:06.090319 env[1191]: time="2024-12-13T06:50:06.090270956Z" level=info msg="Start recovering state" Dec 13 06:50:06.090447 env[1191]: time="2024-12-13T06:50:06.090420685Z" level=info msg="Start event monitor" Dec 13 06:50:06.090502 env[1191]: time="2024-12-13T06:50:06.090467342Z" level=info msg="Start snapshots syncer" Dec 13 06:50:06.090502 env[1191]: time="2024-12-13T06:50:06.090488749Z" level=info msg="Start cni network conf syncer for default" Dec 13 06:50:06.090624 env[1191]: time="2024-12-13T06:50:06.090506117Z" level=info msg="Start streaming server" Dec 13 06:50:06.175124 locksmithd[1228]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 06:50:06.334177 systemd-networkd[1024]: eth0: Gained IPv6LL Dec 13 06:50:06.338970 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 06:50:06.340186 systemd[1]: Reached target network-online.target. Dec 13 06:50:06.343518 systemd[1]: Starting kubelet.service... Dec 13 06:50:07.046770 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 06:50:07.078195 systemd[1]: Finished sshd-keygen.service. Dec 13 06:50:07.082943 systemd[1]: Starting issuegen.service... Dec 13 06:50:07.087117 systemd[1]: Started sshd@0-10.243.75.202:22-139.178.89.65:44932.service. Dec 13 06:50:07.103466 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 06:50:07.103729 systemd[1]: Finished issuegen.service. Dec 13 06:50:07.107348 systemd[1]: Starting systemd-user-sessions.service... Dec 13 06:50:07.120377 systemd[1]: Finished systemd-user-sessions.service. Dec 13 06:50:07.123551 systemd[1]: Started getty@tty1.service. Dec 13 06:50:07.128266 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 06:50:07.131465 systemd[1]: Reached target getty.target. Dec 13 06:50:07.237283 systemd[1]: Started kubelet.service. Dec 13 06:50:07.470694 systemd-networkd[1024]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d2f2:24:19ff:fef3:4bca/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d2f2:24:19ff:fef3:4bca/64 assigned by NDisc. Dec 13 06:50:07.470707 systemd-networkd[1024]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Dec 13 06:50:08.017280 sshd[1257]: Accepted publickey for core from 139.178.89.65 port 44932 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:50:08.018849 sshd[1257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:08.037012 kubelet[1266]: E1213 06:50:08.036174 1266 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:50:08.039494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:50:08.039745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:50:08.040186 systemd[1]: kubelet.service: Consumed 1.094s CPU time. Dec 13 06:50:08.042218 systemd[1]: Created slice user-500.slice. Dec 13 06:50:08.046293 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 06:50:08.051673 systemd-logind[1187]: New session 1 of user core. Dec 13 06:50:08.062357 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 06:50:08.067156 systemd[1]: Starting user@500.service... Dec 13 06:50:08.073359 (systemd)[1275]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:08.180119 systemd[1275]: Queued start job for default target default.target. Dec 13 06:50:08.181257 systemd[1275]: Reached target paths.target. Dec 13 06:50:08.181294 systemd[1275]: Reached target sockets.target. Dec 13 06:50:08.181316 systemd[1275]: Reached target timers.target. Dec 13 06:50:08.181335 systemd[1275]: Reached target basic.target. Dec 13 06:50:08.181409 systemd[1275]: Reached target default.target. Dec 13 06:50:08.181459 systemd[1275]: Startup finished in 96ms. Dec 13 06:50:08.181638 systemd[1]: Started user@500.service. Dec 13 06:50:08.184141 systemd[1]: Started session-1.scope. Dec 13 06:50:08.811427 systemd[1]: Started sshd@1-10.243.75.202:22-139.178.89.65:52682.service. Dec 13 06:50:09.706258 sshd[1285]: Accepted publickey for core from 139.178.89.65 port 52682 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:50:09.707239 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:09.716042 systemd[1]: Started session-2.scope. Dec 13 06:50:09.717160 systemd-logind[1187]: New session 2 of user core. Dec 13 06:50:10.322399 sshd[1285]: pam_unix(sshd:session): session closed for user core Dec 13 06:50:10.326123 systemd-logind[1187]: Session 2 logged out. Waiting for processes to exit. Dec 13 06:50:10.326449 systemd[1]: sshd@1-10.243.75.202:22-139.178.89.65:52682.service: Deactivated successfully. Dec 13 06:50:10.327423 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 06:50:10.328472 systemd-logind[1187]: Removed session 2. Dec 13 06:50:10.469167 systemd[1]: Started sshd@2-10.243.75.202:22-139.178.89.65:52688.service. Dec 13 06:50:11.357938 sshd[1291]: Accepted publickey for core from 139.178.89.65 port 52688 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:50:11.361693 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:11.369922 systemd-logind[1187]: New session 3 of user core. Dec 13 06:50:11.370400 systemd[1]: Started session-3.scope. 
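The kubelet above exits with status=1/FAILURE because /var/lib/kubelet/config.yaml has not been written yet, and systemd keeps rescheduling it (the "restart counter is at 1", then 2, entries below). Purely to illustrate the file the error refers to, a hypothetical minimal stub could be created like this; the field values are placeholders, not taken from this host, where provisioning tooling is expected to supply the real configuration:

    from pathlib import Path

    # Hypothetical minimal KubeletConfiguration stub; apiVersion/kind are the standard
    # kubelet config header, and cgroupDriver matches the systemd cgroup driver seen
    # later in this log. Real nodes get this file from kubeadm or similar tooling.
    stub_lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
    ]
    path = Path("/var/lib/kubelet/config.yaml")        # path named in the error above
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(stub_lines) + "\n")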
Dec 13 06:50:11.978767 sshd[1291]: pam_unix(sshd:session): session closed for user core Dec 13 06:50:11.982582 systemd-logind[1187]: Session 3 logged out. Waiting for processes to exit. Dec 13 06:50:11.983130 systemd[1]: sshd@2-10.243.75.202:22-139.178.89.65:52688.service: Deactivated successfully. Dec 13 06:50:11.984109 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 06:50:11.985176 systemd-logind[1187]: Removed session 3. Dec 13 06:50:12.841303 coreos-metadata[1174]: Dec 13 06:50:12.841 WARN failed to locate config-drive, using the metadata service API instead Dec 13 06:50:12.892378 coreos-metadata[1174]: Dec 13 06:50:12.892 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 06:50:12.935356 coreos-metadata[1174]: Dec 13 06:50:12.935 INFO Fetch successful Dec 13 06:50:12.935565 coreos-metadata[1174]: Dec 13 06:50:12.935 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 06:50:12.991762 coreos-metadata[1174]: Dec 13 06:50:12.991 INFO Fetch successful Dec 13 06:50:12.993953 unknown[1174]: wrote ssh authorized keys file for user: core Dec 13 06:50:13.007296 update-ssh-keys[1298]: Updated "/home/core/.ssh/authorized_keys" Dec 13 06:50:13.008677 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 06:50:13.009322 systemd[1]: Reached target multi-user.target. Dec 13 06:50:13.012394 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 06:50:13.022673 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 06:50:13.022925 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 06:50:13.023217 systemd[1]: Startup finished in 1.111s (kernel) + 12.054s (initrd) + 13.523s (userspace) = 26.689s. Dec 13 06:50:18.210150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 06:50:18.210582 systemd[1]: Stopped kubelet.service. Dec 13 06:50:18.210677 systemd[1]: kubelet.service: Consumed 1.094s CPU time. Dec 13 06:50:18.213490 systemd[1]: Starting kubelet.service... Dec 13 06:50:18.338823 systemd[1]: Started kubelet.service. Dec 13 06:50:18.447269 kubelet[1304]: E1213 06:50:18.447178 1304 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:50:18.451766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:50:18.451978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:50:22.128587 systemd[1]: Started sshd@3-10.243.75.202:22-139.178.89.65:58264.service. Dec 13 06:50:23.020625 sshd[1311]: Accepted publickey for core from 139.178.89.65 port 58264 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:50:23.022649 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:23.030426 systemd[1]: Started session-4.scope. Dec 13 06:50:23.031352 systemd-logind[1187]: New session 4 of user core. Dec 13 06:50:23.640625 sshd[1311]: pam_unix(sshd:session): session closed for user core Dec 13 06:50:23.644429 systemd-logind[1187]: Session 4 logged out. Waiting for processes to exit. Dec 13 06:50:23.645174 systemd[1]: sshd@3-10.243.75.202:22-139.178.89.65:58264.service: Deactivated successfully. 
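The "Startup finished" line above splits the 26.689s boot into kernel, initrd and userspace phases. The three rounded figures sum to 26.688s, so the printed total is presumably derived from the unrounded per-phase values; the check itself is one line:

    # Per-phase times from the "Startup finished" line above.
    phases = {"kernel": 1.111, "initrd": 12.054, "userspace": 13.523}
    print(f"{sum(phases.values()):.3f}s")   # 26.688s; the 1 ms gap to the logged 26.689s
                                            # is just rounding of the per-phase figures.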
Dec 13 06:50:23.646094 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 06:50:23.647170 systemd-logind[1187]: Removed session 4. Dec 13 06:50:23.792670 systemd[1]: Started sshd@4-10.243.75.202:22-139.178.89.65:58274.service. Dec 13 06:50:24.687204 sshd[1317]: Accepted publickey for core from 139.178.89.65 port 58274 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:50:24.689857 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:24.696955 systemd-logind[1187]: New session 5 of user core. Dec 13 06:50:24.697910 systemd[1]: Started session-5.scope. Dec 13 06:50:25.304829 sshd[1317]: pam_unix(sshd:session): session closed for user core Dec 13 06:50:25.308267 systemd[1]: sshd@4-10.243.75.202:22-139.178.89.65:58274.service: Deactivated successfully. Dec 13 06:50:25.309253 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 06:50:25.310038 systemd-logind[1187]: Session 5 logged out. Waiting for processes to exit. Dec 13 06:50:25.311575 systemd-logind[1187]: Removed session 5. Dec 13 06:50:25.452617 systemd[1]: Started sshd@5-10.243.75.202:22-139.178.89.65:58276.service. Dec 13 06:50:26.346610 sshd[1323]: Accepted publickey for core from 139.178.89.65 port 58276 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:50:26.349138 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:26.355439 systemd-logind[1187]: New session 6 of user core. Dec 13 06:50:26.356250 systemd[1]: Started session-6.scope. Dec 13 06:50:26.970839 sshd[1323]: pam_unix(sshd:session): session closed for user core Dec 13 06:50:26.974985 systemd[1]: sshd@5-10.243.75.202:22-139.178.89.65:58276.service: Deactivated successfully. Dec 13 06:50:26.976032 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 06:50:26.976911 systemd-logind[1187]: Session 6 logged out. Waiting for processes to exit. Dec 13 06:50:26.978243 systemd-logind[1187]: Removed session 6. Dec 13 06:50:27.116467 systemd[1]: Started sshd@6-10.243.75.202:22-139.178.89.65:58290.service. Dec 13 06:50:28.003914 sshd[1329]: Accepted publickey for core from 139.178.89.65 port 58290 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 06:50:28.005825 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 06:50:28.013330 systemd[1]: Started session-7.scope. Dec 13 06:50:28.015180 systemd-logind[1187]: New session 7 of user core. Dec 13 06:50:28.458975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 06:50:28.459312 systemd[1]: Stopped kubelet.service. Dec 13 06:50:28.461899 systemd[1]: Starting kubelet.service... Dec 13 06:50:28.590639 systemd[1]: Started kubelet.service. Dec 13 06:50:28.666239 sudo[1334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 06:50:28.666642 sudo[1334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 06:50:28.690520 systemd[1]: Starting coreos-metadata.service... 
Dec 13 06:50:28.728482 kubelet[1337]: E1213 06:50:28.728256 1337 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 06:50:28.731081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 06:50:28.731326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 06:50:35.743523 coreos-metadata[1346]: Dec 13 06:50:35.743 WARN failed to locate config-drive, using the metadata service API instead Dec 13 06:50:35.794766 coreos-metadata[1346]: Dec 13 06:50:35.794 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 06:50:35.796443 coreos-metadata[1346]: Dec 13 06:50:35.796 INFO Fetch successful Dec 13 06:50:35.796644 coreos-metadata[1346]: Dec 13 06:50:35.796 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 06:50:35.807079 coreos-metadata[1346]: Dec 13 06:50:35.806 INFO Fetch successful Dec 13 06:50:35.807300 coreos-metadata[1346]: Dec 13 06:50:35.807 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 06:50:35.820754 coreos-metadata[1346]: Dec 13 06:50:35.820 INFO Fetch successful Dec 13 06:50:35.820849 coreos-metadata[1346]: Dec 13 06:50:35.820 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 06:50:35.836946 coreos-metadata[1346]: Dec 13 06:50:35.836 INFO Fetch successful Dec 13 06:50:35.837208 coreos-metadata[1346]: Dec 13 06:50:35.837 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 06:50:35.851927 coreos-metadata[1346]: Dec 13 06:50:35.851 INFO Fetch successful Dec 13 06:50:35.863507 systemd[1]: Finished coreos-metadata.service. Dec 13 06:50:36.709807 systemd[1]: Stopped kubelet.service. Dec 13 06:50:36.714043 systemd[1]: Starting kubelet.service... Dec 13 06:50:36.741259 systemd[1]: Reloading. Dec 13 06:50:36.889856 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T06:50:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:50:36.890906 /usr/lib/systemd/system-generators/torcx-generator[1414]: time="2024-12-13T06:50:36Z" level=info msg="torcx already run" Dec 13 06:50:36.982419 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:50:36.983542 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:50:37.011194 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:50:37.151903 systemd[1]: Started kubelet.service. Dec 13 06:50:37.154298 systemd[1]: Stopping kubelet.service... Dec 13 06:50:37.155865 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 06:50:37.156176 systemd[1]: Stopped kubelet.service. Dec 13 06:50:37.158897 systemd[1]: Starting kubelet.service... 
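coreos-metadata above reports that no config-drive could be located and falls back to the metadata service, fetching hostname, instance-id, instance-type, local-ipv4 and public-ipv4 from http://169.254.169.254/latest/meta-data/. A sketch of the same style of request, which is only meaningful on a host that actually serves that metadata endpoint:

    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data"
    # Paths taken from the coreos-metadata fetches logged above.
    for path in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
        with urlopen(f"{BASE}/{path}", timeout=5) as resp:
            print(path, "=", resp.read().decode().strip())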
Dec 13 06:50:37.278535 systemd[1]: Started kubelet.service. Dec 13 06:50:37.340763 kubelet[1462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:50:37.341447 kubelet[1462]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 06:50:37.341588 kubelet[1462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:50:37.341826 kubelet[1462]: I1213 06:50:37.341758 1462 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 06:50:37.506457 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 06:50:37.513244 kubelet[1462]: I1213 06:50:37.513212 1462 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 06:50:37.513595 kubelet[1462]: I1213 06:50:37.513572 1462 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 06:50:37.514025 kubelet[1462]: I1213 06:50:37.514000 1462 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 06:50:37.542952 kubelet[1462]: I1213 06:50:37.541728 1462 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 06:50:37.564465 kubelet[1462]: I1213 06:50:37.564417 1462 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 06:50:37.565732 kubelet[1462]: I1213 06:50:37.565690 1462 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 06:50:37.566003 kubelet[1462]: I1213 06:50:37.565973 1462 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 06:50:37.566711 kubelet[1462]: I1213 06:50:37.566669 1462 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 06:50:37.566711 kubelet[1462]: I1213 06:50:37.566710 1462 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 06:50:37.566952 kubelet[1462]: I1213 06:50:37.566923 1462 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:50:37.567584 kubelet[1462]: I1213 06:50:37.567209 1462 kubelet.go:396] "Attempting to sync node with API server" Dec 13 06:50:37.567673 kubelet[1462]: I1213 06:50:37.567591 1462 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 06:50:37.567673 kubelet[1462]: I1213 06:50:37.567670 1462 kubelet.go:312] "Adding apiserver pod source" Dec 13 06:50:37.567868 kubelet[1462]: E1213 06:50:37.567816 1462 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:37.567939 kubelet[1462]: E1213 06:50:37.567897 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:37.568049 kubelet[1462]: I1213 06:50:37.567940 1462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 06:50:37.570764 kubelet[1462]: I1213 06:50:37.570730 1462 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 06:50:37.574096 kubelet[1462]: I1213 06:50:37.574055 1462 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 06:50:37.575542 kubelet[1462]: W1213 06:50:37.575508 1462 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 06:50:37.576135 kubelet[1462]: W1213 06:50:37.576109 1462 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.243.75.202" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 06:50:37.576306 kubelet[1462]: E1213 06:50:37.576283 1462 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.243.75.202" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 06:50:37.576511 kubelet[1462]: I1213 06:50:37.576483 1462 server.go:1256] "Started kubelet" Dec 13 06:50:37.576647 kubelet[1462]: W1213 06:50:37.576622 1462 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 06:50:37.576770 kubelet[1462]: E1213 06:50:37.576749 1462 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 06:50:37.581823 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 06:50:37.582019 kubelet[1462]: I1213 06:50:37.581480 1462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 06:50:37.591804 kubelet[1462]: E1213 06:50:37.591771 1462 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 06:50:37.591985 kubelet[1462]: I1213 06:50:37.591896 1462 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 06:50:37.593141 kubelet[1462]: I1213 06:50:37.593102 1462 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 06:50:37.593729 kubelet[1462]: I1213 06:50:37.593703 1462 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 06:50:37.593825 kubelet[1462]: I1213 06:50:37.593803 1462 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 06:50:37.595304 kubelet[1462]: I1213 06:50:37.595278 1462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 06:50:37.595964 kubelet[1462]: I1213 06:50:37.595921 1462 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 06:50:37.597213 kubelet[1462]: I1213 06:50:37.597109 1462 server.go:461] "Adding debug handlers to kubelet server" Dec 13 06:50:37.601255 kubelet[1462]: I1213 06:50:37.600688 1462 factory.go:221] Registration of the systemd container factory successfully Dec 13 06:50:37.601255 kubelet[1462]: I1213 06:50:37.600845 1462 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 06:50:37.621191 kubelet[1462]: E1213 06:50:37.621147 1462 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.243.75.202\" not found" node="10.243.75.202" Dec 13 06:50:37.621680 kubelet[1462]: I1213 06:50:37.621643 1462 factory.go:221] Registration of the containerd container factory successfully Dec 13 06:50:37.645736 kubelet[1462]: I1213 06:50:37.645697 1462 cpu_manager.go:214] 
"Starting CPU manager" policy="none" Dec 13 06:50:37.645736 kubelet[1462]: I1213 06:50:37.645730 1462 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 06:50:37.645967 kubelet[1462]: I1213 06:50:37.645785 1462 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:50:37.648073 kubelet[1462]: I1213 06:50:37.648036 1462 policy_none.go:49] "None policy: Start" Dec 13 06:50:37.649054 kubelet[1462]: I1213 06:50:37.649018 1462 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 06:50:37.649472 kubelet[1462]: I1213 06:50:37.649448 1462 state_mem.go:35] "Initializing new in-memory state store" Dec 13 06:50:37.658947 systemd[1]: Created slice kubepods.slice. Dec 13 06:50:37.668172 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 06:50:37.672294 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 06:50:37.679395 kubelet[1462]: I1213 06:50:37.679367 1462 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 06:50:37.681152 kubelet[1462]: I1213 06:50:37.681112 1462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 06:50:37.684197 kubelet[1462]: E1213 06:50:37.684149 1462 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.243.75.202\" not found" Dec 13 06:50:37.694036 kubelet[1462]: I1213 06:50:37.693995 1462 kubelet_node_status.go:73] "Attempting to register node" node="10.243.75.202" Dec 13 06:50:37.699323 kubelet[1462]: I1213 06:50:37.699297 1462 kubelet_node_status.go:76] "Successfully registered node" node="10.243.75.202" Dec 13 06:50:37.708924 kubelet[1462]: E1213 06:50:37.708898 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.243.75.202\" not found" Dec 13 06:50:37.762177 kubelet[1462]: I1213 06:50:37.762146 1462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 06:50:37.764231 kubelet[1462]: I1213 06:50:37.764209 1462 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 06:50:37.764432 kubelet[1462]: I1213 06:50:37.764398 1462 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 06:50:37.764931 kubelet[1462]: I1213 06:50:37.764906 1462 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 06:50:37.765648 kubelet[1462]: E1213 06:50:37.765616 1462 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 06:50:37.810685 kubelet[1462]: E1213 06:50:37.809451 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.243.75.202\" not found" Dec 13 06:50:37.910410 kubelet[1462]: E1213 06:50:37.910360 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.243.75.202\" not found" Dec 13 06:50:38.011443 kubelet[1462]: E1213 06:50:38.011379 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.243.75.202\" not found" Dec 13 06:50:38.112995 kubelet[1462]: E1213 06:50:38.112797 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.243.75.202\" not found" Dec 13 06:50:38.214224 kubelet[1462]: E1213 06:50:38.214169 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.243.75.202\" not found" Dec 13 06:50:38.315296 kubelet[1462]: E1213 06:50:38.315242 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.243.75.202\" not found" Dec 13 06:50:38.417129 kubelet[1462]: I1213 06:50:38.416972 1462 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 06:50:38.418892 env[1191]: time="2024-12-13T06:50:38.418610616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 06:50:38.419359 kubelet[1462]: I1213 06:50:38.419012 1462 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 06:50:38.516743 kubelet[1462]: I1213 06:50:38.516622 1462 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 06:50:38.517008 kubelet[1462]: W1213 06:50:38.516899 1462 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 06:50:38.517008 kubelet[1462]: W1213 06:50:38.516982 1462 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 06:50:38.517162 kubelet[1462]: W1213 06:50:38.517020 1462 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 06:50:38.568914 kubelet[1462]: I1213 06:50:38.568841 1462 apiserver.go:52] "Watching apiserver" Dec 13 06:50:38.569186 kubelet[1462]: E1213 06:50:38.569160 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:38.581024 kubelet[1462]: I1213 06:50:38.580988 1462 topology_manager.go:215] "Topology Admit Handler" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" podNamespace="kube-system" podName="cilium-9889r" Dec 13 06:50:38.581480 kubelet[1462]: I1213 06:50:38.581454 1462 topology_manager.go:215] "Topology Admit Handler" podUID="0b0ab03e-3067-46ca-b28c-70f02bc2e25f" podNamespace="kube-system" podName="kube-proxy-zmbj7" Dec 13 06:50:38.589380 systemd[1]: Created slice kubepods-besteffort-pod0b0ab03e_3067_46ca_b28c_70f02bc2e25f.slice. 
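The slice names systemd reports here follow the kubelet's systemd cgroup driver layout visible in the journal: QoS slices (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) plus one slice per pod with the pod UID's dashes replaced by underscores, e.g. kubepods-besteffort-pod0b0ab03e_3067_46ca_b28c_70f02bc2e25f.slice for kube-proxy-zmbj7. A small sketch of that naming scheme, for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the slice names visible in the journal above.
// Illustrative only; the kubelet's systemd cgroup driver does this internally.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	switch qosClass {
	case "Guaranteed":
		// Guaranteed pods live directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	case "Burstable":
		return fmt.Sprintf("kubepods-burstable-pod%s.slice", uid)
	default: // BestEffort
		return fmt.Sprintf("kubepods-besteffort-pod%s.slice", uid)
	}
}

func main() {
	// These reproduce the two per-pod slices created in the log.
	fmt.Println(podSliceName("BestEffort", "0b0ab03e-3067-46ca-b28c-70f02bc2e25f"))
	fmt.Println(podSliceName("Burstable", "955653e6-c8a6-46e1-b516-1de8ccc1dec1"))
}
```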
Dec 13 06:50:38.594770 kubelet[1462]: I1213 06:50:38.594684 1462 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 06:50:38.598456 kubelet[1462]: I1213 06:50:38.598427 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hostproc\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598540 kubelet[1462]: I1213 06:50:38.598474 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-cgroup\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598540 kubelet[1462]: I1213 06:50:38.598504 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cni-path\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598540 kubelet[1462]: I1213 06:50:38.598534 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-etc-cni-netd\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598730 kubelet[1462]: I1213 06:50:38.598573 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hubble-tls\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598730 kubelet[1462]: I1213 06:50:38.598602 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b0ab03e-3067-46ca-b28c-70f02bc2e25f-xtables-lock\") pod \"kube-proxy-zmbj7\" (UID: \"0b0ab03e-3067-46ca-b28c-70f02bc2e25f\") " pod="kube-system/kube-proxy-zmbj7" Dec 13 06:50:38.598730 kubelet[1462]: I1213 06:50:38.598642 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-run\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598730 kubelet[1462]: I1213 06:50:38.598670 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-xtables-lock\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598730 kubelet[1462]: I1213 06:50:38.598698 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-config-path\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.598730 kubelet[1462]: I1213 
06:50:38.598729 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c2fc\" (UniqueName: \"kubernetes.io/projected/0b0ab03e-3067-46ca-b28c-70f02bc2e25f-kube-api-access-9c2fc\") pod \"kube-proxy-zmbj7\" (UID: \"0b0ab03e-3067-46ca-b28c-70f02bc2e25f\") " pod="kube-system/kube-proxy-zmbj7" Dec 13 06:50:38.599039 kubelet[1462]: I1213 06:50:38.598756 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-bpf-maps\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.599039 kubelet[1462]: I1213 06:50:38.598785 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955653e6-c8a6-46e1-b516-1de8ccc1dec1-clustermesh-secrets\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.599039 kubelet[1462]: I1213 06:50:38.598820 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjp8\" (UniqueName: \"kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-kube-api-access-2rjp8\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.599039 kubelet[1462]: I1213 06:50:38.598861 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-kernel\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.599039 kubelet[1462]: I1213 06:50:38.598891 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b0ab03e-3067-46ca-b28c-70f02bc2e25f-kube-proxy\") pod \"kube-proxy-zmbj7\" (UID: \"0b0ab03e-3067-46ca-b28c-70f02bc2e25f\") " pod="kube-system/kube-proxy-zmbj7" Dec 13 06:50:38.599333 kubelet[1462]: I1213 06:50:38.598917 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b0ab03e-3067-46ca-b28c-70f02bc2e25f-lib-modules\") pod \"kube-proxy-zmbj7\" (UID: \"0b0ab03e-3067-46ca-b28c-70f02bc2e25f\") " pod="kube-system/kube-proxy-zmbj7" Dec 13 06:50:38.599333 kubelet[1462]: I1213 06:50:38.598943 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-lib-modules\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.599333 kubelet[1462]: I1213 06:50:38.598972 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-net\") pod \"cilium-9889r\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " pod="kube-system/cilium-9889r" Dec 13 06:50:38.603024 systemd[1]: Created slice kubepods-burstable-pod955653e6_c8a6_46e1_b516_1de8ccc1dec1.slice. 
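The long run of VerifyControllerAttachedVolume lines above is the kubelet attaching the cilium-9889r and kube-proxy-zmbj7 volumes: mostly hostPath volumes plus configmap, secret, and projected service-account tokens. The sketch below shows how one such hostPath volume/mount pair looks in a pod spec, assuming the k8s.io/api module is available; the host paths are the conventional ones from Cilium's manifests and are an assumption, since the journal records only the volume names.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a hostPath volume and a matching mount, as the
// cilium-9889r pod spec would declare them. Fragmentary and illustrative only.
func hostPathVolume(name, path string) (corev1.Volume, corev1.VolumeMount) {
	v := corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
	m := corev1.VolumeMount{Name: name, MountPath: path}
	return v, m
}

func main() {
	// Paths below are assumed (conventional for Cilium); the log shows only names.
	for _, hp := range [][2]string{
		{"bpf-maps", "/sys/fs/bpf"},
		{"cni-path", "/opt/cni/bin"},
		{"hostproc", "/proc"},
	} {
		v, m := hostPathVolume(hp[0], hp[1])
		fmt.Printf("volume %q -> hostPath %q (mounted at %q)\n",
			v.Name, v.VolumeSource.HostPath.Path, m.MountPath)
	}
}
```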
Dec 13 06:50:38.810040 sudo[1334]: pam_unix(sudo:session): session closed for user root Dec 13 06:50:38.900266 env[1191]: time="2024-12-13T06:50:38.900179140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmbj7,Uid:0b0ab03e-3067-46ca-b28c-70f02bc2e25f,Namespace:kube-system,Attempt:0,}" Dec 13 06:50:38.910633 env[1191]: time="2024-12-13T06:50:38.910237128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9889r,Uid:955653e6-c8a6-46e1-b516-1de8ccc1dec1,Namespace:kube-system,Attempt:0,}" Dec 13 06:50:38.955536 sshd[1329]: pam_unix(sshd:session): session closed for user core Dec 13 06:50:38.960257 systemd[1]: sshd@6-10.243.75.202:22-139.178.89.65:58290.service: Deactivated successfully. Dec 13 06:50:38.961882 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 06:50:38.961942 systemd-logind[1187]: Session 7 logged out. Waiting for processes to exit. Dec 13 06:50:38.964144 systemd-logind[1187]: Removed session 7. Dec 13 06:50:39.570044 kubelet[1462]: E1213 06:50:39.569975 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:39.677228 env[1191]: time="2024-12-13T06:50:39.677161776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.680694 env[1191]: time="2024-12-13T06:50:39.680612938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.682478 env[1191]: time="2024-12-13T06:50:39.682437644Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.685179 env[1191]: time="2024-12-13T06:50:39.685147504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.687388 env[1191]: time="2024-12-13T06:50:39.687353903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.689911 env[1191]: time="2024-12-13T06:50:39.689863973Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.693693 env[1191]: time="2024-12-13T06:50:39.693651172Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.696037 env[1191]: time="2024-12-13T06:50:39.696004629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:39.707733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852573721.mount: Deactivated successfully. Dec 13 06:50:39.731530 env[1191]: time="2024-12-13T06:50:39.726490563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:50:39.731530 env[1191]: time="2024-12-13T06:50:39.726561424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:50:39.731530 env[1191]: time="2024-12-13T06:50:39.726578880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:50:39.731530 env[1191]: time="2024-12-13T06:50:39.730839791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a pid=1520 runtime=io.containerd.runc.v2 Dec 13 06:50:39.745879 env[1191]: time="2024-12-13T06:50:39.745780968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:50:39.746142 env[1191]: time="2024-12-13T06:50:39.745836213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:50:39.746294 env[1191]: time="2024-12-13T06:50:39.746253562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:50:39.746612 env[1191]: time="2024-12-13T06:50:39.746567976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71422a541abe9d81079bf4caa5e4e8860c7162ed70381f68458c9c9303603e17 pid=1540 runtime=io.containerd.runc.v2 Dec 13 06:50:39.769941 systemd[1]: Started cri-containerd-9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a.scope. Dec 13 06:50:39.806717 systemd[1]: Started cri-containerd-71422a541abe9d81079bf4caa5e4e8860c7162ed70381f68458c9c9303603e17.scope. 
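Each "starting signal loop" line above corresponds to a runc v2 shim whose state directory is the path= shown in the log, under the k8s.io namespace. A read-only sketch that lists those directories to recover the sandbox/container IDs containerd is currently managing on this node:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists shim state directories under the runtime v2 task root seen in the log.
// Purely observational; requires root on the node to read /run/containerd.
func main() {
	root := "/run/containerd/io.containerd.runtime.v2.task/k8s.io"
	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("cannot read", root, ":", err)
		return
	}
	for _, e := range entries {
		if e.IsDir() {
			// Directory names are the sandbox/container IDs, e.g. 9036371bf2c2....
			fmt.Println(filepath.Join(root, e.Name()))
		}
	}
}
```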
Dec 13 06:50:39.832546 env[1191]: time="2024-12-13T06:50:39.832400786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9889r,Uid:955653e6-c8a6-46e1-b516-1de8ccc1dec1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\"" Dec 13 06:50:39.840427 env[1191]: time="2024-12-13T06:50:39.839564779Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 06:50:39.850990 env[1191]: time="2024-12-13T06:50:39.850938732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmbj7,Uid:0b0ab03e-3067-46ca-b28c-70f02bc2e25f,Namespace:kube-system,Attempt:0,} returns sandbox id \"71422a541abe9d81079bf4caa5e4e8860c7162ed70381f68458c9c9303603e17\"" Dec 13 06:50:40.570574 kubelet[1462]: E1213 06:50:40.570452 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:41.571410 kubelet[1462]: E1213 06:50:41.571305 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:42.573111 kubelet[1462]: E1213 06:50:42.572434 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:43.573448 kubelet[1462]: E1213 06:50:43.573382 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:44.573664 kubelet[1462]: E1213 06:50:44.573534 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:45.573842 kubelet[1462]: E1213 06:50:45.573785 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:46.574709 kubelet[1462]: E1213 06:50:46.574583 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:47.460701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583236250.mount: Deactivated successfully. Dec 13 06:50:47.575367 kubelet[1462]: E1213 06:50:47.575265 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:48.576225 kubelet[1462]: E1213 06:50:48.576046 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:49.577426 kubelet[1462]: E1213 06:50:49.577338 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:50.577776 kubelet[1462]: E1213 06:50:50.577715 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:51.346151 update_engine[1188]: I1213 06:50:51.345430 1188 update_attempter.cc:509] Updating boot flags... 
Dec 13 06:50:51.579050 kubelet[1462]: E1213 06:50:51.578934 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:51.872115 env[1191]: time="2024-12-13T06:50:51.871988732Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:51.875823 env[1191]: time="2024-12-13T06:50:51.874528540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:51.878178 env[1191]: time="2024-12-13T06:50:51.877572706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:51.878879 env[1191]: time="2024-12-13T06:50:51.878809234Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 06:50:51.881556 env[1191]: time="2024-12-13T06:50:51.881504666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 06:50:51.884742 env[1191]: time="2024-12-13T06:50:51.884566826Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:50:51.899344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618198882.mount: Deactivated successfully. Dec 13 06:50:51.906924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462811952.mount: Deactivated successfully. Dec 13 06:50:51.913933 env[1191]: time="2024-12-13T06:50:51.913857667Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\"" Dec 13 06:50:51.914905 env[1191]: time="2024-12-13T06:50:51.914847376Z" level=info msg="StartContainer for \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\"" Dec 13 06:50:51.951667 systemd[1]: Started cri-containerd-9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073.scope. Dec 13 06:50:51.997789 env[1191]: time="2024-12-13T06:50:51.996238737Z" level=info msg="StartContainer for \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\" returns successfully" Dec 13 06:50:52.008741 systemd[1]: cri-containerd-9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073.scope: Deactivated successfully. 
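The Cilium image above is pulled by a tag pinned to a digest, and containerd reports back a local image reference (the sha256:3e35... ID). A small sketch with plain string handling makes the shape of that pull spec explicit; real clients use a proper reference-parsing library, so this is only to illustrate the log line.

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef splits "repo:tag@digest" into its parts.
func splitRef(ref string) (repo, tag, digest string) {
	repo = ref
	if i := strings.Index(repo, "@"); i >= 0 {
		repo, digest = repo[:i], repo[i+1:]
	}
	// A ':' after the last '/' separates the tag from the repository;
	// earlier colons (e.g. a registry port) must be ignored.
	if i := strings.LastIndex(repo, ":"); i > strings.LastIndex(repo, "/") {
		repo, tag = repo[:i], repo[i+1:]
	}
	return repo, tag, digest
}

func main() {
	repo, tag, digest := splitRef(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println("repository:", repo)
	fmt.Println("tag:       ", tag)
	fmt.Println("digest:    ", digest)
}
```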
Dec 13 06:50:52.295153 env[1191]: time="2024-12-13T06:50:52.295058240Z" level=info msg="shim disconnected" id=9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073 Dec 13 06:50:52.295647 env[1191]: time="2024-12-13T06:50:52.295614907Z" level=warning msg="cleaning up after shim disconnected" id=9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073 namespace=k8s.io Dec 13 06:50:52.295804 env[1191]: time="2024-12-13T06:50:52.295764472Z" level=info msg="cleaning up dead shim" Dec 13 06:50:52.315130 env[1191]: time="2024-12-13T06:50:52.315055915Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:50:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1661 runtime=io.containerd.runc.v2\n" Dec 13 06:50:52.580176 kubelet[1462]: E1213 06:50:52.579771 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:52.880596 env[1191]: time="2024-12-13T06:50:52.879617697Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 06:50:52.897327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073-rootfs.mount: Deactivated successfully. Dec 13 06:50:52.905732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369609103.mount: Deactivated successfully. Dec 13 06:50:52.919868 env[1191]: time="2024-12-13T06:50:52.919796064Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\"" Dec 13 06:50:52.926428 env[1191]: time="2024-12-13T06:50:52.926395092Z" level=info msg="StartContainer for \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\"" Dec 13 06:50:52.970940 systemd[1]: Started cri-containerd-7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf.scope. Dec 13 06:50:53.036247 env[1191]: time="2024-12-13T06:50:53.036181561Z" level=info msg="StartContainer for \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\" returns successfully" Dec 13 06:50:53.067041 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 06:50:53.067419 systemd[1]: Stopped systemd-sysctl.service. Dec 13 06:50:53.067999 systemd[1]: Stopping systemd-sysctl.service... Dec 13 06:50:53.073580 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:50:53.074120 systemd[1]: cri-containerd-7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf.scope: Deactivated successfully. Dec 13 06:50:53.094922 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 06:50:53.158155 env[1191]: time="2024-12-13T06:50:53.157446446Z" level=info msg="shim disconnected" id=7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf Dec 13 06:50:53.158155 env[1191]: time="2024-12-13T06:50:53.157530560Z" level=warning msg="cleaning up after shim disconnected" id=7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf namespace=k8s.io Dec 13 06:50:53.158155 env[1191]: time="2024-12-13T06:50:53.157547586Z" level=info msg="cleaning up dead shim" Dec 13 06:50:53.176018 env[1191]: time="2024-12-13T06:50:53.175926869Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:50:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1727 runtime=io.containerd.runc.v2\n" Dec 13 06:50:53.580056 kubelet[1462]: E1213 06:50:53.579977 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:53.881910 env[1191]: time="2024-12-13T06:50:53.881779508Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 06:50:53.896656 systemd[1]: run-containerd-runc-k8s.io-7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf-runc.o4ugJg.mount: Deactivated successfully. Dec 13 06:50:53.896793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf-rootfs.mount: Deactivated successfully. Dec 13 06:50:53.900377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113617791.mount: Deactivated successfully. Dec 13 06:50:53.908001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151292091.mount: Deactivated successfully. Dec 13 06:50:53.913402 env[1191]: time="2024-12-13T06:50:53.913345513Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\"" Dec 13 06:50:53.914352 env[1191]: time="2024-12-13T06:50:53.914312245Z" level=info msg="StartContainer for \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\"" Dec 13 06:50:53.957797 systemd[1]: Started cri-containerd-0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd.scope. Dec 13 06:50:54.065595 systemd[1]: cri-containerd-0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd.scope: Deactivated successfully. 
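The mount-bpf-fs container created next is one of Cilium's init containers; per Cilium's manifests its job is to ensure a bpf filesystem is mounted at /sys/fs/bpf. The same condition can be checked read-only by scanning /proc/mounts, as in this sketch (it does not mount anything itself):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// bpfMounted reports whether a bpf filesystem is mounted at /sys/fs/bpf.
func bpfMounted() (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each line: <device> <mountpoint> <fstype> <options> ...
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			return true, nil
		}
	}
	return false, scanner.Err()
}

func main() {
	ok, err := bpfMounted()
	fmt.Println("bpf mounted at /sys/fs/bpf:", ok, err)
}
```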
Dec 13 06:50:54.069651 env[1191]: time="2024-12-13T06:50:54.069579339Z" level=info msg="StartContainer for \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\" returns successfully" Dec 13 06:50:54.106944 env[1191]: time="2024-12-13T06:50:54.106879646Z" level=info msg="shim disconnected" id=0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd Dec 13 06:50:54.107180 env[1191]: time="2024-12-13T06:50:54.106946549Z" level=warning msg="cleaning up after shim disconnected" id=0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd namespace=k8s.io Dec 13 06:50:54.107180 env[1191]: time="2024-12-13T06:50:54.106972503Z" level=info msg="cleaning up dead shim" Dec 13 06:50:54.125326 env[1191]: time="2024-12-13T06:50:54.125234433Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:50:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1786 runtime=io.containerd.runc.v2\n" Dec 13 06:50:54.581012 kubelet[1462]: E1213 06:50:54.580946 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:54.859239 env[1191]: time="2024-12-13T06:50:54.859111064Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:54.861119 env[1191]: time="2024-12-13T06:50:54.861083037Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:54.862866 env[1191]: time="2024-12-13T06:50:54.862819718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:54.864244 env[1191]: time="2024-12-13T06:50:54.864211787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:50:54.864955 env[1191]: time="2024-12-13T06:50:54.864918039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 06:50:54.868940 env[1191]: time="2024-12-13T06:50:54.868883940Z" level=info msg="CreateContainer within sandbox \"71422a541abe9d81079bf4caa5e4e8860c7162ed70381f68458c9c9303603e17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 06:50:54.886644 env[1191]: time="2024-12-13T06:50:54.886594911Z" level=info msg="CreateContainer within sandbox \"71422a541abe9d81079bf4caa5e4e8860c7162ed70381f68458c9c9303603e17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bf8a56ddad67adcface038495e092279cbb86174c698c092ba0a2b693c50a2ce\"" Dec 13 06:50:54.887932 env[1191]: time="2024-12-13T06:50:54.887895916Z" level=info msg="StartContainer for \"bf8a56ddad67adcface038495e092279cbb86174c698c092ba0a2b693c50a2ce\"" Dec 13 06:50:54.893048 env[1191]: time="2024-12-13T06:50:54.893005673Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 06:50:54.895985 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd-rootfs.mount: Deactivated successfully. Dec 13 06:50:54.913199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108782633.mount: Deactivated successfully. Dec 13 06:50:54.927702 env[1191]: time="2024-12-13T06:50:54.927647431Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\"" Dec 13 06:50:54.928534 env[1191]: time="2024-12-13T06:50:54.928499558Z" level=info msg="StartContainer for \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\"" Dec 13 06:50:54.940580 systemd[1]: Started cri-containerd-bf8a56ddad67adcface038495e092279cbb86174c698c092ba0a2b693c50a2ce.scope. Dec 13 06:50:54.964237 systemd[1]: Started cri-containerd-927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c.scope. Dec 13 06:50:55.003864 env[1191]: time="2024-12-13T06:50:55.003799809Z" level=info msg="StartContainer for \"bf8a56ddad67adcface038495e092279cbb86174c698c092ba0a2b693c50a2ce\" returns successfully" Dec 13 06:50:55.028758 systemd[1]: cri-containerd-927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c.scope: Deactivated successfully. Dec 13 06:50:55.034416 env[1191]: time="2024-12-13T06:50:55.034025183Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod955653e6_c8a6_46e1_b516_1de8ccc1dec1.slice/cri-containerd-927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c.scope/memory.events\": no such file or directory" Dec 13 06:50:55.050626 env[1191]: time="2024-12-13T06:50:55.050556352Z" level=info msg="StartContainer for \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\" returns successfully" Dec 13 06:50:55.173473 env[1191]: time="2024-12-13T06:50:55.173329240Z" level=info msg="shim disconnected" id=927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c Dec 13 06:50:55.173473 env[1191]: time="2024-12-13T06:50:55.173400324Z" level=warning msg="cleaning up after shim disconnected" id=927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c namespace=k8s.io Dec 13 06:50:55.173473 env[1191]: time="2024-12-13T06:50:55.173418482Z" level=info msg="cleaning up dead shim" Dec 13 06:50:55.185988 env[1191]: time="2024-12-13T06:50:55.185928662Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:50:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1880 runtime=io.containerd.runc.v2\n" Dec 13 06:50:55.582270 kubelet[1462]: E1213 06:50:55.582159 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:55.896423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606945485.mount: Deactivated successfully. 
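The cgroupsv2 EventChan warning above comes from containerd trying to add an inotify watch on memory.events for a scope whose container (clean-cilium-state) had already exited. For a live scope that file is a plain key/value counter list; a one-shot reader looks like this, with the path taken verbatim from the log and only valid while the scope exists:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// readMemoryEvents parses a cgroup v2 memory.events file into a map.
func readMemoryEvents(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	events := map[string]string{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Lines look like: "oom 0", "oom_kill 0", "max 0", ...
		if k, v, ok := strings.Cut(scanner.Text(), " "); ok {
			events[k] = v
		}
	}
	return events, scanner.Err()
}

func main() {
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pod955653e6_c8a6_46e1_b516_1de8ccc1dec1.slice/" +
		"cri-containerd-927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c.scope/memory.events"
	events, err := readMemoryEvents(path)
	if err != nil {
		fmt.Println("memory.events not available (container likely exited):", err)
		return
	}
	fmt.Println(events)
}
```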
Dec 13 06:50:55.905214 env[1191]: time="2024-12-13T06:50:55.905135438Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 06:50:55.911908 kubelet[1462]: I1213 06:50:55.911817 1462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zmbj7" podStartSLOduration=3.8990836079999998 podStartE2EDuration="18.911663729s" podCreationTimestamp="2024-12-13 06:50:37 +0000 UTC" firstStartedPulling="2024-12-13 06:50:39.852768463 +0000 UTC m=+2.568334751" lastFinishedPulling="2024-12-13 06:50:54.865348571 +0000 UTC m=+17.580914872" observedRunningTime="2024-12-13 06:50:55.911366743 +0000 UTC m=+18.626933038" watchObservedRunningTime="2024-12-13 06:50:55.911663729 +0000 UTC m=+18.627230033" Dec 13 06:50:55.922006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865271303.mount: Deactivated successfully. Dec 13 06:50:55.929027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061411410.mount: Deactivated successfully. Dec 13 06:50:55.932020 env[1191]: time="2024-12-13T06:50:55.931956731Z" level=info msg="CreateContainer within sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\"" Dec 13 06:50:55.932818 env[1191]: time="2024-12-13T06:50:55.932762759Z" level=info msg="StartContainer for \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\"" Dec 13 06:50:55.955510 systemd[1]: Started cri-containerd-3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b.scope. Dec 13 06:50:56.015213 env[1191]: time="2024-12-13T06:50:56.015153887Z" level=info msg="StartContainer for \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\" returns successfully" Dec 13 06:50:56.163796 kubelet[1462]: I1213 06:50:56.163647 1462 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 06:50:56.583092 kubelet[1462]: E1213 06:50:56.583041 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:56.691108 kernel: Initializing XFRM netlink socket Dec 13 06:50:57.568381 kubelet[1462]: E1213 06:50:57.568266 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:57.584669 kubelet[1462]: E1213 06:50:57.584570 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:58.420563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 06:50:58.420711 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 06:50:58.424846 systemd-networkd[1024]: cilium_host: Link UP Dec 13 06:50:58.426858 systemd-networkd[1024]: cilium_net: Link UP Dec 13 06:50:58.428111 systemd-networkd[1024]: cilium_net: Gained carrier Dec 13 06:50:58.429228 systemd-networkd[1024]: cilium_host: Gained carrier Dec 13 06:50:58.585186 kubelet[1462]: E1213 06:50:58.585132 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:58.587451 systemd-networkd[1024]: cilium_vxlan: Link UP Dec 13 06:50:58.587462 systemd-networkd[1024]: cilium_vxlan: Gained carrier Dec 13 06:50:58.645712 systemd-networkd[1024]: 
cilium_net: Gained IPv6LL Dec 13 06:50:58.949199 kernel: NET: Registered PF_ALG protocol family Dec 13 06:50:59.022166 systemd-networkd[1024]: cilium_host: Gained IPv6LL Dec 13 06:50:59.586345 kubelet[1462]: E1213 06:50:59.586256 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:50:59.837334 systemd-networkd[1024]: cilium_vxlan: Gained IPv6LL Dec 13 06:50:59.914980 systemd-networkd[1024]: lxc_health: Link UP Dec 13 06:50:59.932226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 06:50:59.934306 systemd-networkd[1024]: lxc_health: Gained carrier Dec 13 06:51:00.066302 kubelet[1462]: I1213 06:51:00.066254 1462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9889r" podStartSLOduration=11.022643339 podStartE2EDuration="23.066172414s" podCreationTimestamp="2024-12-13 06:50:37 +0000 UTC" firstStartedPulling="2024-12-13 06:50:39.836120038 +0000 UTC m=+2.551686333" lastFinishedPulling="2024-12-13 06:50:51.87964907 +0000 UTC m=+14.595215408" observedRunningTime="2024-12-13 06:50:56.933002956 +0000 UTC m=+19.648569251" watchObservedRunningTime="2024-12-13 06:51:00.066172414 +0000 UTC m=+22.781738715" Dec 13 06:51:00.587612 kubelet[1462]: E1213 06:51:00.587508 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:01.588803 kubelet[1462]: E1213 06:51:01.588544 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:01.757880 systemd-networkd[1024]: lxc_health: Gained IPv6LL Dec 13 06:51:02.589913 kubelet[1462]: E1213 06:51:02.589808 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:02.973691 kubelet[1462]: I1213 06:51:02.973644 1462 topology_manager.go:215] "Topology Admit Handler" podUID="fb0d7422-6939-491c-94d7-34b82e90b52f" podNamespace="default" podName="nginx-deployment-6d5f899847-vpvmv" Dec 13 06:51:02.983599 systemd[1]: Created slice kubepods-besteffort-podfb0d7422_6939_491c_94d7_34b82e90b52f.slice. 
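systemd-networkd reports the Cilium datapath links coming up here: cilium_net, cilium_host, cilium_vxlan, lxc_health, and later one lxc* device per pod. The same interfaces can be listed from the host network namespace with just the standard library:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// Lists Cilium-related interfaces (cilium_* and per-pod lxc* veths) on the node.
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifi := range ifaces {
		if strings.HasPrefix(ifi.Name, "cilium") || strings.HasPrefix(ifi.Name, "lxc") {
			fmt.Printf("%-16s up=%v mtu=%d\n", ifi.Name, ifi.Flags&net.FlagUp != 0, ifi.MTU)
		}
	}
}
```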
Dec 13 06:51:02.989765 kubelet[1462]: I1213 06:51:02.989651 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hch2s\" (UniqueName: \"kubernetes.io/projected/fb0d7422-6939-491c-94d7-34b82e90b52f-kube-api-access-hch2s\") pod \"nginx-deployment-6d5f899847-vpvmv\" (UID: \"fb0d7422-6939-491c-94d7-34b82e90b52f\") " pod="default/nginx-deployment-6d5f899847-vpvmv" Dec 13 06:51:03.292840 env[1191]: time="2024-12-13T06:51:03.291444868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vpvmv,Uid:fb0d7422-6939-491c-94d7-34b82e90b52f,Namespace:default,Attempt:0,}" Dec 13 06:51:03.368183 systemd-networkd[1024]: lxc0077828a8d4d: Link UP Dec 13 06:51:03.376215 kernel: eth0: renamed from tmpc6374 Dec 13 06:51:03.385802 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 06:51:03.385913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0077828a8d4d: link becomes ready Dec 13 06:51:03.386212 systemd-networkd[1024]: lxc0077828a8d4d: Gained carrier Dec 13 06:51:03.591836 kubelet[1462]: E1213 06:51:03.590930 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:04.592000 kubelet[1462]: E1213 06:51:04.591913 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:05.213746 systemd-networkd[1024]: lxc0077828a8d4d: Gained IPv6LL Dec 13 06:51:05.593833 kubelet[1462]: E1213 06:51:05.593682 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:06.177116 env[1191]: time="2024-12-13T06:51:06.176937664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:51:06.177116 env[1191]: time="2024-12-13T06:51:06.177038708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:51:06.178180 env[1191]: time="2024-12-13T06:51:06.177086897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:51:06.178180 env[1191]: time="2024-12-13T06:51:06.177418981Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c63746dc9782136f8a3743227a1b3bc1210e4f0f8e226177a62a1f5f83218c6a pid=2546 runtime=io.containerd.runc.v2 Dec 13 06:51:06.202833 systemd[1]: Started cri-containerd-c63746dc9782136f8a3743227a1b3bc1210e4f0f8e226177a62a1f5f83218c6a.scope. 
Dec 13 06:51:06.272262 env[1191]: time="2024-12-13T06:51:06.272183013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vpvmv,Uid:fb0d7422-6939-491c-94d7-34b82e90b52f,Namespace:default,Attempt:0,} returns sandbox id \"c63746dc9782136f8a3743227a1b3bc1210e4f0f8e226177a62a1f5f83218c6a\"" Dec 13 06:51:06.275990 env[1191]: time="2024-12-13T06:51:06.275957142Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 06:51:06.595736 kubelet[1462]: E1213 06:51:06.595585 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:07.596297 kubelet[1462]: E1213 06:51:07.596230 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:08.596524 kubelet[1462]: E1213 06:51:08.596447 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:09.596827 kubelet[1462]: E1213 06:51:09.596753 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:10.518831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221864660.mount: Deactivated successfully. Dec 13 06:51:10.597227 kubelet[1462]: E1213 06:51:10.597162 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:11.598089 kubelet[1462]: E1213 06:51:11.597998 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:12.598624 kubelet[1462]: E1213 06:51:12.598536 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:12.867504 env[1191]: time="2024-12-13T06:51:12.866743551Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:12.869801 env[1191]: time="2024-12-13T06:51:12.869762803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:12.876277 env[1191]: time="2024-12-13T06:51:12.876239547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:12.877275 env[1191]: time="2024-12-13T06:51:12.877230740Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:12.880947 env[1191]: time="2024-12-13T06:51:12.879749012Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 06:51:12.883558 env[1191]: time="2024-12-13T06:51:12.883465186Z" level=info msg="CreateContainer within sandbox \"c63746dc9782136f8a3743227a1b3bc1210e4f0f8e226177a62a1f5f83218c6a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 06:51:12.899474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216798039.mount: Deactivated successfully. 
Dec 13 06:51:12.908098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237321334.mount: Deactivated successfully. Dec 13 06:51:12.926917 env[1191]: time="2024-12-13T06:51:12.926830826Z" level=info msg="CreateContainer within sandbox \"c63746dc9782136f8a3743227a1b3bc1210e4f0f8e226177a62a1f5f83218c6a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3e88dd64c72073285e80cf25d61204ad4d4de1f3180531dd68c530582b7bb094\"" Dec 13 06:51:12.928122 env[1191]: time="2024-12-13T06:51:12.928053698Z" level=info msg="StartContainer for \"3e88dd64c72073285e80cf25d61204ad4d4de1f3180531dd68c530582b7bb094\"" Dec 13 06:51:12.965225 systemd[1]: Started cri-containerd-3e88dd64c72073285e80cf25d61204ad4d4de1f3180531dd68c530582b7bb094.scope. Dec 13 06:51:13.018229 env[1191]: time="2024-12-13T06:51:13.018176149Z" level=info msg="StartContainer for \"3e88dd64c72073285e80cf25d61204ad4d4de1f3180531dd68c530582b7bb094\" returns successfully" Dec 13 06:51:13.599316 kubelet[1462]: E1213 06:51:13.599209 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:14.599824 kubelet[1462]: E1213 06:51:14.599756 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:15.601474 kubelet[1462]: E1213 06:51:15.601415 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:16.602912 kubelet[1462]: E1213 06:51:16.602836 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:17.568365 kubelet[1462]: E1213 06:51:17.568300 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:17.603816 kubelet[1462]: E1213 06:51:17.603769 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:18.603986 kubelet[1462]: E1213 06:51:18.603928 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:19.605286 kubelet[1462]: E1213 06:51:19.605220 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:20.606133 kubelet[1462]: E1213 06:51:20.606038 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:21.607348 kubelet[1462]: E1213 06:51:21.607293 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:22.608170 kubelet[1462]: E1213 06:51:22.608112 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:23.609790 kubelet[1462]: E1213 06:51:23.609733 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:24.607925 kubelet[1462]: I1213 06:51:24.607821 1462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-vpvmv" podStartSLOduration=16.001188852 podStartE2EDuration="22.60770793s" podCreationTimestamp="2024-12-13 06:51:02 +0000 UTC" firstStartedPulling="2024-12-13 06:51:06.274661552 +0000 UTC m=+28.990227847" lastFinishedPulling="2024-12-13 06:51:12.881180627 
+0000 UTC m=+35.596746925" observedRunningTime="2024-12-13 06:51:13.975036679 +0000 UTC m=+36.690603001" watchObservedRunningTime="2024-12-13 06:51:24.60770793 +0000 UTC m=+47.323274243" Dec 13 06:51:24.608298 kubelet[1462]: I1213 06:51:24.608052 1462 topology_manager.go:215] "Topology Admit Handler" podUID="1a1a36cd-4df3-4c60-91ad-579b4d7294df" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 06:51:24.611591 kubelet[1462]: E1213 06:51:24.611554 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:24.612186 kubelet[1462]: I1213 06:51:24.611936 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1a1a36cd-4df3-4c60-91ad-579b4d7294df-data\") pod \"nfs-server-provisioner-0\" (UID: \"1a1a36cd-4df3-4c60-91ad-579b4d7294df\") " pod="default/nfs-server-provisioner-0" Dec 13 06:51:24.612378 kubelet[1462]: I1213 06:51:24.612352 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86txb\" (UniqueName: \"kubernetes.io/projected/1a1a36cd-4df3-4c60-91ad-579b4d7294df-kube-api-access-86txb\") pod \"nfs-server-provisioner-0\" (UID: \"1a1a36cd-4df3-4c60-91ad-579b4d7294df\") " pod="default/nfs-server-provisioner-0" Dec 13 06:51:24.615790 systemd[1]: Created slice kubepods-besteffort-pod1a1a36cd_4df3_4c60_91ad_579b4d7294df.slice. Dec 13 06:51:24.920815 env[1191]: time="2024-12-13T06:51:24.919939145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1a1a36cd-4df3-4c60-91ad-579b4d7294df,Namespace:default,Attempt:0,}" Dec 13 06:51:24.988450 systemd-networkd[1024]: lxc6cf733c62bfe: Link UP Dec 13 06:51:25.001099 kernel: eth0: renamed from tmp79c8a Dec 13 06:51:25.009660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 06:51:25.009780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6cf733c62bfe: link becomes ready Dec 13 06:51:25.009993 systemd-networkd[1024]: lxc6cf733c62bfe: Gained carrier Dec 13 06:51:25.243969 env[1191]: time="2024-12-13T06:51:25.243856108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:51:25.244555 env[1191]: time="2024-12-13T06:51:25.243932960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:51:25.244555 env[1191]: time="2024-12-13T06:51:25.243949909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:51:25.244555 env[1191]: time="2024-12-13T06:51:25.244224481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79c8a8ae41fd9fbbec8d1a192a6e4885e8025fe9621dfe742668f6740f562ad0 pid=2680 runtime=io.containerd.runc.v2 Dec 13 06:51:25.270182 systemd[1]: Started cri-containerd-79c8a8ae41fd9fbbec8d1a192a6e4885e8025fe9621dfe742668f6740f562ad0.scope. 
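The pod_startup_latency_tracker lines print timestamps in Go's default time.Time format, including the monotonic clock suffix ("m=+..."). This sketch parses two of the nginx pod's timestamps from the log and recomputes the image-pull portion of its startup time, which is where most of the 22.6s E2E duration went:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseLogTime parses a timestamp as printed by time.Time.String(), dropping
// the monotonic suffix that time.Parse does not understand.
func parseLogTime(s string) (time.Time, error) {
	if i := strings.Index(s, " m=+"); i >= 0 {
		s = s[:i]
	}
	return time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
}

func main() {
	first, err := parseLogTime("2024-12-13 06:51:06.274661552 +0000 UTC m=+28.990227847")
	if err != nil {
		panic(err)
	}
	last, err := parseLogTime("2024-12-13 06:51:12.881180627 +0000 UTC m=+35.596746925")
	if err != nil {
		panic(err)
	}
	fmt.Println("nginx image pull took:", last.Sub(first)) // roughly 6.6s
}
```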
Dec 13 06:51:25.338214 env[1191]: time="2024-12-13T06:51:25.338159872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1a1a36cd-4df3-4c60-91ad-579b4d7294df,Namespace:default,Attempt:0,} returns sandbox id \"79c8a8ae41fd9fbbec8d1a192a6e4885e8025fe9621dfe742668f6740f562ad0\"" Dec 13 06:51:25.341238 env[1191]: time="2024-12-13T06:51:25.341055763Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 06:51:25.613594 kubelet[1462]: E1213 06:51:25.612895 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:25.727621 systemd[1]: run-containerd-runc-k8s.io-79c8a8ae41fd9fbbec8d1a192a6e4885e8025fe9621dfe742668f6740f562ad0-runc.UIu5qe.mount: Deactivated successfully. Dec 13 06:51:26.614020 kubelet[1462]: E1213 06:51:26.613926 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:26.846450 systemd-networkd[1024]: lxc6cf733c62bfe: Gained IPv6LL Dec 13 06:51:27.614625 kubelet[1462]: E1213 06:51:27.614481 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:28.615257 kubelet[1462]: E1213 06:51:28.615139 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:29.371528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072273489.mount: Deactivated successfully. Dec 13 06:51:29.616221 kubelet[1462]: E1213 06:51:29.616112 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:30.616440 kubelet[1462]: E1213 06:51:30.616355 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:31.617149 kubelet[1462]: E1213 06:51:31.617067 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:32.617677 kubelet[1462]: E1213 06:51:32.617582 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:32.698505 env[1191]: time="2024-12-13T06:51:32.698408591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:32.701898 env[1191]: time="2024-12-13T06:51:32.701857722Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:32.704917 env[1191]: time="2024-12-13T06:51:32.704882875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:32.708231 env[1191]: time="2024-12-13T06:51:32.707129837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:32.708453 env[1191]: time="2024-12-13T06:51:32.708199428Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 06:51:32.713008 env[1191]: time="2024-12-13T06:51:32.712935702Z" level=info msg="CreateContainer within sandbox \"79c8a8ae41fd9fbbec8d1a192a6e4885e8025fe9621dfe742668f6740f562ad0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 06:51:32.725888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086151020.mount: Deactivated successfully. Dec 13 06:51:32.734833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949611112.mount: Deactivated successfully. Dec 13 06:51:32.735908 env[1191]: time="2024-12-13T06:51:32.735845014Z" level=info msg="CreateContainer within sandbox \"79c8a8ae41fd9fbbec8d1a192a6e4885e8025fe9621dfe742668f6740f562ad0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7b24a4dc8e1016e883bc7a723541bdaf0060e84aaf47918b76d217de2af5be62\"" Dec 13 06:51:32.736957 env[1191]: time="2024-12-13T06:51:32.736884910Z" level=info msg="StartContainer for \"7b24a4dc8e1016e883bc7a723541bdaf0060e84aaf47918b76d217de2af5be62\"" Dec 13 06:51:32.774488 systemd[1]: Started cri-containerd-7b24a4dc8e1016e883bc7a723541bdaf0060e84aaf47918b76d217de2af5be62.scope. Dec 13 06:51:32.830366 env[1191]: time="2024-12-13T06:51:32.830310495Z" level=info msg="StartContainer for \"7b24a4dc8e1016e883bc7a723541bdaf0060e84aaf47918b76d217de2af5be62\" returns successfully" Dec 13 06:51:33.027630 kubelet[1462]: I1213 06:51:33.027579 1462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.659214513 podStartE2EDuration="9.027458002s" podCreationTimestamp="2024-12-13 06:51:24 +0000 UTC" firstStartedPulling="2024-12-13 06:51:25.340506349 +0000 UTC m=+48.056072643" lastFinishedPulling="2024-12-13 06:51:32.708749827 +0000 UTC m=+55.424316132" observedRunningTime="2024-12-13 06:51:33.026842556 +0000 UTC m=+55.742408859" watchObservedRunningTime="2024-12-13 06:51:33.027458002 +0000 UTC m=+55.743024300" Dec 13 06:51:33.618818 kubelet[1462]: E1213 06:51:33.618736 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:34.619706 kubelet[1462]: E1213 06:51:34.619632 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:35.620877 kubelet[1462]: E1213 06:51:35.620796 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:36.621410 kubelet[1462]: E1213 06:51:36.621295 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:37.568415 kubelet[1462]: E1213 06:51:37.568322 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:37.621779 kubelet[1462]: E1213 06:51:37.621688 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:38.623014 kubelet[1462]: E1213 06:51:38.622912 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:39.624209 kubelet[1462]: E1213 06:51:39.624154 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 06:51:40.625339 kubelet[1462]: E1213 06:51:40.625272 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:41.626165 kubelet[1462]: E1213 06:51:41.626102 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:42.626760 kubelet[1462]: E1213 06:51:42.626692 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:42.949501 kubelet[1462]: I1213 06:51:42.948835 1462 topology_manager.go:215] "Topology Admit Handler" podUID="af36a7e7-4ea6-47fe-92b7-98f2f66cf5f0" podNamespace="default" podName="test-pod-1" Dec 13 06:51:42.957685 systemd[1]: Created slice kubepods-besteffort-podaf36a7e7_4ea6_47fe_92b7_98f2f66cf5f0.slice. Dec 13 06:51:43.110124 kubelet[1462]: I1213 06:51:43.110017 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62bj\" (UniqueName: \"kubernetes.io/projected/af36a7e7-4ea6-47fe-92b7-98f2f66cf5f0-kube-api-access-s62bj\") pod \"test-pod-1\" (UID: \"af36a7e7-4ea6-47fe-92b7-98f2f66cf5f0\") " pod="default/test-pod-1" Dec 13 06:51:43.110361 kubelet[1462]: I1213 06:51:43.110157 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d50fb3c9-65c0-46e9-8a71-636916bad74c\" (UniqueName: \"kubernetes.io/nfs/af36a7e7-4ea6-47fe-92b7-98f2f66cf5f0-pvc-d50fb3c9-65c0-46e9-8a71-636916bad74c\") pod \"test-pod-1\" (UID: \"af36a7e7-4ea6-47fe-92b7-98f2f66cf5f0\") " pod="default/test-pod-1" Dec 13 06:51:43.260307 kernel: FS-Cache: Loaded Dec 13 06:51:43.330000 kernel: RPC: Registered named UNIX socket transport module. Dec 13 06:51:43.330231 kernel: RPC: Registered udp transport module. Dec 13 06:51:43.330303 kernel: RPC: Registered tcp transport module. Dec 13 06:51:43.331184 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 06:51:43.415101 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 06:51:43.629924 kubelet[1462]: E1213 06:51:43.628421 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:43.650622 kernel: NFS: Registering the id_resolver key type Dec 13 06:51:43.650795 kernel: Key type id_resolver registered Dec 13 06:51:43.651911 kernel: Key type id_legacy registered Dec 13 06:51:43.708229 nfsidmap[2795]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 06:51:43.715271 nfsidmap[2798]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 06:51:43.865015 env[1191]: time="2024-12-13T06:51:43.864953902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af36a7e7-4ea6-47fe-92b7-98f2f66cf5f0,Namespace:default,Attempt:0,}" Dec 13 06:51:43.931335 systemd-networkd[1024]: lxcf1b0fb3bb5c4: Link UP Dec 13 06:51:43.944114 kernel: eth0: renamed from tmpfabe1 Dec 13 06:51:43.959979 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 06:51:43.960155 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf1b0fb3bb5c4: link becomes ready Dec 13 06:51:43.960490 systemd-networkd[1024]: lxcf1b0fb3bb5c4: Gained carrier Dec 13 06:51:44.189053 env[1191]: time="2024-12-13T06:51:44.188349726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:51:44.189053 env[1191]: time="2024-12-13T06:51:44.188496564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:51:44.189053 env[1191]: time="2024-12-13T06:51:44.188574297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:51:44.189390 env[1191]: time="2024-12-13T06:51:44.189160631Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fabe17486678d2511640585a02e5c462137f9b3765d5a977c4f295790cd62e7e pid=2838 runtime=io.containerd.runc.v2 Dec 13 06:51:44.207681 systemd[1]: Started cri-containerd-fabe17486678d2511640585a02e5c462137f9b3765d5a977c4f295790cd62e7e.scope. 
Dec 13 06:51:44.282003 env[1191]: time="2024-12-13T06:51:44.281935051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af36a7e7-4ea6-47fe-92b7-98f2f66cf5f0,Namespace:default,Attempt:0,} returns sandbox id \"fabe17486678d2511640585a02e5c462137f9b3765d5a977c4f295790cd62e7e\"" Dec 13 06:51:44.284302 env[1191]: time="2024-12-13T06:51:44.283992736Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 06:51:44.621128 env[1191]: time="2024-12-13T06:51:44.621055959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:44.622590 env[1191]: time="2024-12-13T06:51:44.622557808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:44.624862 env[1191]: time="2024-12-13T06:51:44.624824664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:44.629025 kubelet[1462]: E1213 06:51:44.628948 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:44.629589 env[1191]: time="2024-12-13T06:51:44.629554513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:51:44.630590 env[1191]: time="2024-12-13T06:51:44.630553454Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 06:51:44.633542 env[1191]: time="2024-12-13T06:51:44.633483284Z" level=info msg="CreateContainer within sandbox \"fabe17486678d2511640585a02e5c462137f9b3765d5a977c4f295790cd62e7e\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 06:51:44.646402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833292226.mount: Deactivated successfully. Dec 13 06:51:44.653524 env[1191]: time="2024-12-13T06:51:44.653437794Z" level=info msg="CreateContainer within sandbox \"fabe17486678d2511640585a02e5c462137f9b3765d5a977c4f295790cd62e7e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8f608be1d777855613261dfd61a1ffc4e379533a23ebb2bea5cb9ca380e31b0c\"" Dec 13 06:51:44.654636 env[1191]: time="2024-12-13T06:51:44.654603004Z" level=info msg="StartContainer for \"8f608be1d777855613261dfd61a1ffc4e379533a23ebb2bea5cb9ca380e31b0c\"" Dec 13 06:51:44.684582 systemd[1]: Started cri-containerd-8f608be1d777855613261dfd61a1ffc4e379533a23ebb2bea5cb9ca380e31b0c.scope. 
Dec 13 06:51:44.730505 env[1191]: time="2024-12-13T06:51:44.730450422Z" level=info msg="StartContainer for \"8f608be1d777855613261dfd61a1ffc4e379533a23ebb2bea5cb9ca380e31b0c\" returns successfully" Dec 13 06:51:45.056800 kubelet[1462]: I1213 06:51:45.056746 1462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.709311449 podStartE2EDuration="19.056670603s" podCreationTimestamp="2024-12-13 06:51:26 +0000 UTC" firstStartedPulling="2024-12-13 06:51:44.283581615 +0000 UTC m=+66.999147910" lastFinishedPulling="2024-12-13 06:51:44.630940764 +0000 UTC m=+67.346507064" observedRunningTime="2024-12-13 06:51:45.056437813 +0000 UTC m=+67.772004135" watchObservedRunningTime="2024-12-13 06:51:45.056670603 +0000 UTC m=+67.772236909" Dec 13 06:51:45.341522 systemd-networkd[1024]: lxcf1b0fb3bb5c4: Gained IPv6LL Dec 13 06:51:45.630128 kubelet[1462]: E1213 06:51:45.629904 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:46.631233 kubelet[1462]: E1213 06:51:46.631144 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:47.632324 kubelet[1462]: E1213 06:51:47.632253 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:48.633160 kubelet[1462]: E1213 06:51:48.633036 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:49.634188 kubelet[1462]: E1213 06:51:49.634126 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:50.634902 kubelet[1462]: E1213 06:51:50.634818 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:51.635977 kubelet[1462]: E1213 06:51:51.635904 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:52.535017 systemd[1]: run-containerd-runc-k8s.io-3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b-runc.848GCh.mount: Deactivated successfully. Dec 13 06:51:52.577344 env[1191]: time="2024-12-13T06:51:52.577234621Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 06:51:52.584640 env[1191]: time="2024-12-13T06:51:52.584600484Z" level=info msg="StopContainer for \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\" with timeout 2 (s)" Dec 13 06:51:52.585034 env[1191]: time="2024-12-13T06:51:52.585000343Z" level=info msg="Stop container \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\" with signal terminated" Dec 13 06:51:52.593861 systemd-networkd[1024]: lxc_health: Link DOWN Dec 13 06:51:52.593870 systemd-networkd[1024]: lxc_health: Lost carrier Dec 13 06:51:52.635994 systemd[1]: cri-containerd-3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b.scope: Deactivated successfully. Dec 13 06:51:52.636530 systemd[1]: cri-containerd-3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b.scope: Consumed 9.351s CPU time. 
Dec 13 06:51:52.639419 kubelet[1462]: E1213 06:51:52.637983 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:52.663211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b-rootfs.mount: Deactivated successfully. Dec 13 06:51:52.671916 env[1191]: time="2024-12-13T06:51:52.671841298Z" level=info msg="shim disconnected" id=3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b Dec 13 06:51:52.672129 env[1191]: time="2024-12-13T06:51:52.671915798Z" level=warning msg="cleaning up after shim disconnected" id=3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b namespace=k8s.io Dec 13 06:51:52.672129 env[1191]: time="2024-12-13T06:51:52.671940511Z" level=info msg="cleaning up dead shim" Dec 13 06:51:52.685650 env[1191]: time="2024-12-13T06:51:52.685585925Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2967 runtime=io.containerd.runc.v2\n" Dec 13 06:51:52.687999 env[1191]: time="2024-12-13T06:51:52.687946813Z" level=info msg="StopContainer for \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\" returns successfully" Dec 13 06:51:52.688912 env[1191]: time="2024-12-13T06:51:52.688875738Z" level=info msg="StopPodSandbox for \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\"" Dec 13 06:51:52.689279 env[1191]: time="2024-12-13T06:51:52.689243720Z" level=info msg="Container to stop \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:51:52.689418 env[1191]: time="2024-12-13T06:51:52.689386600Z" level=info msg="Container to stop \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:51:52.689547 env[1191]: time="2024-12-13T06:51:52.689516001Z" level=info msg="Container to stop \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:51:52.689674 env[1191]: time="2024-12-13T06:51:52.689643475Z" level=info msg="Container to stop \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:51:52.689811 env[1191]: time="2024-12-13T06:51:52.689781499Z" level=info msg="Container to stop \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:51:52.692332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a-shm.mount: Deactivated successfully. Dec 13 06:51:52.700028 systemd[1]: cri-containerd-9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a.scope: Deactivated successfully. 
Dec 13 06:51:52.707089 kubelet[1462]: E1213 06:51:52.707040 1462 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 06:51:52.729688 env[1191]: time="2024-12-13T06:51:52.729628799Z" level=info msg="shim disconnected" id=9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a Dec 13 06:51:52.729995 env[1191]: time="2024-12-13T06:51:52.729953222Z" level=warning msg="cleaning up after shim disconnected" id=9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a namespace=k8s.io Dec 13 06:51:52.730654 env[1191]: time="2024-12-13T06:51:52.730626626Z" level=info msg="cleaning up dead shim" Dec 13 06:51:52.743139 env[1191]: time="2024-12-13T06:51:52.743078989Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3000 runtime=io.containerd.runc.v2\n" Dec 13 06:51:52.743970 env[1191]: time="2024-12-13T06:51:52.743912839Z" level=info msg="TearDown network for sandbox \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" successfully" Dec 13 06:51:52.744080 env[1191]: time="2024-12-13T06:51:52.743973080Z" level=info msg="StopPodSandbox for \"9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a\" returns successfully" Dec 13 06:51:52.878647 kubelet[1462]: I1213 06:51:52.878496 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-kernel\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.878647 kubelet[1462]: I1213 06:51:52.878562 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cni-path\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.878647 kubelet[1462]: I1213 06:51:52.878589 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-run\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.879184 kubelet[1462]: I1213 06:51:52.879139 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cni-path" (OuterVolumeSpecName: "cni-path") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.879414 kubelet[1462]: I1213 06:51:52.879382 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.879599 kubelet[1462]: I1213 06:51:52.879573 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.880172 kubelet[1462]: I1213 06:51:52.879729 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955653e6-c8a6-46e1-b516-1de8ccc1dec1-clustermesh-secrets\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.880351 kubelet[1462]: I1213 06:51:52.880315 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rjp8\" (UniqueName: \"kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-kube-api-access-2rjp8\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.880507 kubelet[1462]: I1213 06:51:52.880484 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-net\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.880663 kubelet[1462]: I1213 06:51:52.880641 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hostproc\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.880829 kubelet[1462]: I1213 06:51:52.880806 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-etc-cni-netd\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.882182 kubelet[1462]: I1213 06:51:52.882142 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-xtables-lock\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.882282 kubelet[1462]: I1213 06:51:52.882205 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-config-path\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.882282 kubelet[1462]: I1213 06:51:52.882236 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-lib-modules\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.882282 kubelet[1462]: I1213 06:51:52.882268 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-cgroup\") 
pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.882589 kubelet[1462]: I1213 06:51:52.882296 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-bpf-maps\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.882589 kubelet[1462]: I1213 06:51:52.882341 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hubble-tls\") pod \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\" (UID: \"955653e6-c8a6-46e1-b516-1de8ccc1dec1\") " Dec 13 06:51:52.882589 kubelet[1462]: I1213 06:51:52.882392 1462 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-run\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.882589 kubelet[1462]: I1213 06:51:52.882414 1462 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-kernel\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.882589 kubelet[1462]: I1213 06:51:52.882436 1462 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cni-path\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.886218 kubelet[1462]: I1213 06:51:52.886177 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:51:52.886366 kubelet[1462]: I1213 06:51:52.880980 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.886506 kubelet[1462]: I1213 06:51:52.881243 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.886636 kubelet[1462]: I1213 06:51:52.881267 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hostproc" (OuterVolumeSpecName: "hostproc") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.886841 kubelet[1462]: I1213 06:51:52.886787 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.887189 kubelet[1462]: I1213 06:51:52.887136 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.887366 kubelet[1462]: I1213 06:51:52.887331 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.887635 kubelet[1462]: I1213 06:51:52.887593 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:52.887635 kubelet[1462]: I1213 06:51:52.887602 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955653e6-c8a6-46e1-b516-1de8ccc1dec1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:51:52.890061 kubelet[1462]: I1213 06:51:52.890030 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:51:52.891766 kubelet[1462]: I1213 06:51:52.891732 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-kube-api-access-2rjp8" (OuterVolumeSpecName: "kube-api-access-2rjp8") pod "955653e6-c8a6-46e1-b516-1de8ccc1dec1" (UID: "955653e6-c8a6-46e1-b516-1de8ccc1dec1"). InnerVolumeSpecName "kube-api-access-2rjp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:51:52.983298 kubelet[1462]: I1213 06:51:52.983243 1462 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955653e6-c8a6-46e1-b516-1de8ccc1dec1-clustermesh-secrets\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.983612 kubelet[1462]: I1213 06:51:52.983584 1462 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2rjp8\" (UniqueName: \"kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-kube-api-access-2rjp8\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.983754 kubelet[1462]: I1213 06:51:52.983735 1462 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hostproc\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.983942 kubelet[1462]: I1213 06:51:52.983923 1462 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-etc-cni-netd\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.984122 kubelet[1462]: I1213 06:51:52.984101 1462 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-xtables-lock\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.984299 kubelet[1462]: I1213 06:51:52.984279 1462 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-config-path\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.984444 kubelet[1462]: I1213 06:51:52.984424 1462 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-lib-modules\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.984640 kubelet[1462]: I1213 06:51:52.984610 1462 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-host-proc-sys-net\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.984838 kubelet[1462]: I1213 06:51:52.984819 1462 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-bpf-maps\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.985011 kubelet[1462]: I1213 06:51:52.984991 1462 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955653e6-c8a6-46e1-b516-1de8ccc1dec1-cilium-cgroup\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:52.985231 kubelet[1462]: I1213 06:51:52.985192 1462 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955653e6-c8a6-46e1-b516-1de8ccc1dec1-hubble-tls\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:53.067159 kubelet[1462]: I1213 06:51:53.067098 1462 scope.go:117] "RemoveContainer" containerID="3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b" Dec 13 06:51:53.072550 systemd[1]: Removed slice kubepods-burstable-pod955653e6_c8a6_46e1_b516_1de8ccc1dec1.slice. Dec 13 06:51:53.072681 systemd[1]: kubepods-burstable-pod955653e6_c8a6_46e1_b516_1de8ccc1dec1.slice: Consumed 9.517s CPU time. 
Dec 13 06:51:53.078193 env[1191]: time="2024-12-13T06:51:53.078131984Z" level=info msg="RemoveContainer for \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\"" Dec 13 06:51:53.082156 env[1191]: time="2024-12-13T06:51:53.082113204Z" level=info msg="RemoveContainer for \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\" returns successfully" Dec 13 06:51:53.082506 kubelet[1462]: I1213 06:51:53.082477 1462 scope.go:117] "RemoveContainer" containerID="927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c" Dec 13 06:51:53.083814 env[1191]: time="2024-12-13T06:51:53.083780823Z" level=info msg="RemoveContainer for \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\"" Dec 13 06:51:53.086790 env[1191]: time="2024-12-13T06:51:53.086728590Z" level=info msg="RemoveContainer for \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\" returns successfully" Dec 13 06:51:53.087013 kubelet[1462]: I1213 06:51:53.086981 1462 scope.go:117] "RemoveContainer" containerID="0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd" Dec 13 06:51:53.088767 env[1191]: time="2024-12-13T06:51:53.088716625Z" level=info msg="RemoveContainer for \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\"" Dec 13 06:51:53.091956 env[1191]: time="2024-12-13T06:51:53.091888683Z" level=info msg="RemoveContainer for \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\" returns successfully" Dec 13 06:51:53.092250 kubelet[1462]: I1213 06:51:53.092226 1462 scope.go:117] "RemoveContainer" containerID="7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf" Dec 13 06:51:53.093774 env[1191]: time="2024-12-13T06:51:53.093721891Z" level=info msg="RemoveContainer for \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\"" Dec 13 06:51:53.112088 env[1191]: time="2024-12-13T06:51:53.111931519Z" level=info msg="RemoveContainer for \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\" returns successfully" Dec 13 06:51:53.112434 kubelet[1462]: I1213 06:51:53.112391 1462 scope.go:117] "RemoveContainer" containerID="9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073" Dec 13 06:51:53.114125 env[1191]: time="2024-12-13T06:51:53.114052635Z" level=info msg="RemoveContainer for \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\"" Dec 13 06:51:53.116813 env[1191]: time="2024-12-13T06:51:53.116766690Z" level=info msg="RemoveContainer for \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\" returns successfully" Dec 13 06:51:53.117107 kubelet[1462]: I1213 06:51:53.117049 1462 scope.go:117] "RemoveContainer" containerID="3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b" Dec 13 06:51:53.117728 env[1191]: time="2024-12-13T06:51:53.117499253Z" level=error msg="ContainerStatus for \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\": not found" Dec 13 06:51:53.117936 kubelet[1462]: E1213 06:51:53.117915 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\": not found" containerID="3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b" Dec 13 06:51:53.118132 kubelet[1462]: I1213 
06:51:53.118096 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b"} err="failed to get container status \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bd0eee4e4730e2cfebcc1caa851d7b1d5216f69fa1d371d9cd86a077f41632b\": not found" Dec 13 06:51:53.118132 kubelet[1462]: I1213 06:51:53.118128 1462 scope.go:117] "RemoveContainer" containerID="927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c" Dec 13 06:51:53.118602 env[1191]: time="2024-12-13T06:51:53.118465417Z" level=error msg="ContainerStatus for \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\": not found" Dec 13 06:51:53.118993 kubelet[1462]: E1213 06:51:53.118970 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\": not found" containerID="927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c" Dec 13 06:51:53.119170 kubelet[1462]: I1213 06:51:53.119146 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c"} err="failed to get container status \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\": rpc error: code = NotFound desc = an error occurred when try to find container \"927cf9a6489f93427b760f2b700ff9a66a4751afaad8f1a3228816b2df17185c\": not found" Dec 13 06:51:53.119320 kubelet[1462]: I1213 06:51:53.119299 1462 scope.go:117] "RemoveContainer" containerID="0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd" Dec 13 06:51:53.119721 env[1191]: time="2024-12-13T06:51:53.119648621Z" level=error msg="ContainerStatus for \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\": not found" Dec 13 06:51:53.119974 kubelet[1462]: E1213 06:51:53.119953 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\": not found" containerID="0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd" Dec 13 06:51:53.120147 kubelet[1462]: I1213 06:51:53.120126 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd"} err="failed to get container status \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b82cb502aae1270564b75ea18010cbb2d4cdefdc37cdfa7ffb8e6c9ef9b83cd\": not found" Dec 13 06:51:53.120303 kubelet[1462]: I1213 06:51:53.120282 1462 scope.go:117] "RemoveContainer" containerID="7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf" Dec 13 06:51:53.120692 env[1191]: time="2024-12-13T06:51:53.120576549Z" level=error msg="ContainerStatus for 
\"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\": not found" Dec 13 06:51:53.120936 kubelet[1462]: E1213 06:51:53.120916 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\": not found" containerID="7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf" Dec 13 06:51:53.121123 kubelet[1462]: I1213 06:51:53.121102 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf"} err="failed to get container status \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f87d9c26835b78bdcbf25a7c0e79713bbf71bee282d48ff080e2313acd8b9cf\": not found" Dec 13 06:51:53.121274 kubelet[1462]: I1213 06:51:53.121253 1462 scope.go:117] "RemoveContainer" containerID="9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073" Dec 13 06:51:53.121692 env[1191]: time="2024-12-13T06:51:53.121554818Z" level=error msg="ContainerStatus for \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\": not found" Dec 13 06:51:53.121960 kubelet[1462]: E1213 06:51:53.121941 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\": not found" containerID="9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073" Dec 13 06:51:53.122137 kubelet[1462]: I1213 06:51:53.122118 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073"} err="failed to get container status \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e1b748c14a6d16f94553182408704beab45494a271469ac46126df92552c073\": not found" Dec 13 06:51:53.527604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9036371bf2c2c209965a4ac0c4080e449db3c5a07cba33057337f088b6e7956a-rootfs.mount: Deactivated successfully. Dec 13 06:51:53.527764 systemd[1]: var-lib-kubelet-pods-955653e6\x2dc8a6\x2d46e1\x2db516\x2d1de8ccc1dec1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rjp8.mount: Deactivated successfully. Dec 13 06:51:53.527897 systemd[1]: var-lib-kubelet-pods-955653e6\x2dc8a6\x2d46e1\x2db516\x2d1de8ccc1dec1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 06:51:53.528037 systemd[1]: var-lib-kubelet-pods-955653e6\x2dc8a6\x2d46e1\x2db516\x2d1de8ccc1dec1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 06:51:53.638984 kubelet[1462]: E1213 06:51:53.638903 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:53.769709 kubelet[1462]: I1213 06:51:53.769670 1462 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" path="/var/lib/kubelet/pods/955653e6-c8a6-46e1-b516-1de8ccc1dec1/volumes" Dec 13 06:51:54.639498 kubelet[1462]: E1213 06:51:54.639426 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:55.640119 kubelet[1462]: E1213 06:51:55.640043 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:56.641457 kubelet[1462]: E1213 06:51:56.641384 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:57.058870 kubelet[1462]: I1213 06:51:57.058808 1462 topology_manager.go:215] "Topology Admit Handler" podUID="683ae517-a654-4d9a-a778-e61cf5654bd5" podNamespace="kube-system" podName="cilium-operator-5cc964979-dbtpx" Dec 13 06:51:57.059129 kubelet[1462]: E1213 06:51:57.058916 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" containerName="apply-sysctl-overwrites" Dec 13 06:51:57.059129 kubelet[1462]: E1213 06:51:57.058939 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" containerName="mount-bpf-fs" Dec 13 06:51:57.059129 kubelet[1462]: E1213 06:51:57.058952 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" containerName="clean-cilium-state" Dec 13 06:51:57.059129 kubelet[1462]: E1213 06:51:57.058963 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" containerName="mount-cgroup" Dec 13 06:51:57.059129 kubelet[1462]: E1213 06:51:57.058975 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" containerName="cilium-agent" Dec 13 06:51:57.059129 kubelet[1462]: I1213 06:51:57.059031 1462 memory_manager.go:354] "RemoveStaleState removing state" podUID="955653e6-c8a6-46e1-b516-1de8ccc1dec1" containerName="cilium-agent" Dec 13 06:51:57.066011 systemd[1]: Created slice kubepods-besteffort-pod683ae517_a654_4d9a_a778_e61cf5654bd5.slice. Dec 13 06:51:57.068984 kubelet[1462]: I1213 06:51:57.068944 1462 topology_manager.go:215] "Topology Admit Handler" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" podNamespace="kube-system" podName="cilium-lj2z9" Dec 13 06:51:57.075829 systemd[1]: Created slice kubepods-burstable-pod617c98c3_4b8c_4373_85f5_f23e4c9977df.slice. 
Dec 13 06:51:57.213168 kubelet[1462]: I1213 06:51:57.213095 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-kernel\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213168 kubelet[1462]: I1213 06:51:57.213174 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6spz6\" (UniqueName: \"kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-kube-api-access-6spz6\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213453 kubelet[1462]: I1213 06:51:57.213220 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-lib-modules\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213453 kubelet[1462]: I1213 06:51:57.213285 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-net\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213453 kubelet[1462]: I1213 06:51:57.213319 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-etc-cni-netd\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213453 kubelet[1462]: I1213 06:51:57.213348 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-config-path\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213453 kubelet[1462]: I1213 06:51:57.213376 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-clustermesh-secrets\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213718 kubelet[1462]: I1213 06:51:57.213403 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-ipsec-secrets\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213718 kubelet[1462]: I1213 06:51:57.213434 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-run\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213718 kubelet[1462]: I1213 06:51:57.213462 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-hostproc\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213718 kubelet[1462]: I1213 06:51:57.213496 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whxnl\" (UniqueName: \"kubernetes.io/projected/683ae517-a654-4d9a-a778-e61cf5654bd5-kube-api-access-whxnl\") pod \"cilium-operator-5cc964979-dbtpx\" (UID: \"683ae517-a654-4d9a-a778-e61cf5654bd5\") " pod="kube-system/cilium-operator-5cc964979-dbtpx" Dec 13 06:51:57.213718 kubelet[1462]: I1213 06:51:57.213525 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cni-path\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213989 kubelet[1462]: I1213 06:51:57.213560 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-xtables-lock\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213989 kubelet[1462]: I1213 06:51:57.213593 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-cgroup\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213989 kubelet[1462]: I1213 06:51:57.213621 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-hubble-tls\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.213989 kubelet[1462]: I1213 06:51:57.213662 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/683ae517-a654-4d9a-a778-e61cf5654bd5-cilium-config-path\") pod \"cilium-operator-5cc964979-dbtpx\" (UID: \"683ae517-a654-4d9a-a778-e61cf5654bd5\") " pod="kube-system/cilium-operator-5cc964979-dbtpx" Dec 13 06:51:57.213989 kubelet[1462]: I1213 06:51:57.213697 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-bpf-maps\") pod \"cilium-lj2z9\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " pod="kube-system/cilium-lj2z9" Dec 13 06:51:57.371962 env[1191]: time="2024-12-13T06:51:57.371175016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dbtpx,Uid:683ae517-a654-4d9a-a778-e61cf5654bd5,Namespace:kube-system,Attempt:0,}" Dec 13 06:51:57.386037 env[1191]: time="2024-12-13T06:51:57.385982020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj2z9,Uid:617c98c3-4b8c-4373-85f5-f23e4c9977df,Namespace:kube-system,Attempt:0,}" Dec 13 06:51:57.388173 env[1191]: time="2024-12-13T06:51:57.388046375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:51:57.388173 env[1191]: time="2024-12-13T06:51:57.388136560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:51:57.388687 env[1191]: time="2024-12-13T06:51:57.388373482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:51:57.388897 env[1191]: time="2024-12-13T06:51:57.388784021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d1fe152c039d0f7dbe92aaa4d2f8ef2fee1632e895dd67bffb90cff81ef319b pid=3033 runtime=io.containerd.runc.v2 Dec 13 06:51:57.411009 systemd[1]: Started cri-containerd-7d1fe152c039d0f7dbe92aaa4d2f8ef2fee1632e895dd67bffb90cff81ef319b.scope. Dec 13 06:51:57.417577 env[1191]: time="2024-12-13T06:51:57.412110041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:51:57.417577 env[1191]: time="2024-12-13T06:51:57.412166740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:51:57.417577 env[1191]: time="2024-12-13T06:51:57.412183774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:51:57.417577 env[1191]: time="2024-12-13T06:51:57.412346981Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb pid=3059 runtime=io.containerd.runc.v2 Dec 13 06:51:57.444401 systemd[1]: Started cri-containerd-8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb.scope. 
Dec 13 06:51:57.480614 env[1191]: time="2024-12-13T06:51:57.480555160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lj2z9,Uid:617c98c3-4b8c-4373-85f5-f23e4c9977df,Namespace:kube-system,Attempt:0,} returns sandbox id \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\"" Dec 13 06:51:57.486861 env[1191]: time="2024-12-13T06:51:57.486824477Z" level=info msg="CreateContainer within sandbox \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:51:57.504970 env[1191]: time="2024-12-13T06:51:57.504916477Z" level=info msg="CreateContainer within sandbox \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\"" Dec 13 06:51:57.505933 env[1191]: time="2024-12-13T06:51:57.505899214Z" level=info msg="StartContainer for \"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\"" Dec 13 06:51:57.513012 env[1191]: time="2024-12-13T06:51:57.512974511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dbtpx,Uid:683ae517-a654-4d9a-a778-e61cf5654bd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d1fe152c039d0f7dbe92aaa4d2f8ef2fee1632e895dd67bffb90cff81ef319b\"" Dec 13 06:51:57.515087 env[1191]: time="2024-12-13T06:51:57.515026186Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 06:51:57.531995 systemd[1]: Started cri-containerd-fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2.scope. Dec 13 06:51:57.549353 systemd[1]: cri-containerd-fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2.scope: Deactivated successfully. 
Dec 13 06:51:57.565627 env[1191]: time="2024-12-13T06:51:57.565567750Z" level=info msg="shim disconnected" id=fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2 Dec 13 06:51:57.565923 env[1191]: time="2024-12-13T06:51:57.565891951Z" level=warning msg="cleaning up after shim disconnected" id=fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2 namespace=k8s.io Dec 13 06:51:57.566158 env[1191]: time="2024-12-13T06:51:57.566131776Z" level=info msg="cleaning up dead shim" Dec 13 06:51:57.568255 kubelet[1462]: E1213 06:51:57.568220 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:57.576239 env[1191]: time="2024-12-13T06:51:57.576171759Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:51:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 06:51:57.576671 env[1191]: time="2024-12-13T06:51:57.576523790Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Dec 13 06:51:57.579190 env[1191]: time="2024-12-13T06:51:57.579138538Z" level=error msg="Failed to pipe stdout of container \"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\"" error="reading from a closed fifo" Dec 13 06:51:57.579298 env[1191]: time="2024-12-13T06:51:57.579238018Z" level=error msg="Failed to pipe stderr of container \"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\"" error="reading from a closed fifo" Dec 13 06:51:57.580803 env[1191]: time="2024-12-13T06:51:57.580737008Z" level=error msg="StartContainer for \"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 06:51:57.581303 kubelet[1462]: E1213 06:51:57.581259 1462 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2" Dec 13 06:51:57.582661 kubelet[1462]: E1213 06:51:57.582633 1462 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 06:51:57.582661 kubelet[1462]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 06:51:57.582661 kubelet[1462]: rm /hostbin/cilium-mount Dec 13 06:51:57.582868 kubelet[1462]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6spz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lj2z9_kube-system(617c98c3-4b8c-4373-85f5-f23e4c9977df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 06:51:57.582868 kubelet[1462]: E1213 06:51:57.582704 1462 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lj2z9" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" Dec 13 06:51:57.641894 kubelet[1462]: E1213 06:51:57.641685 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:57.708799 kubelet[1462]: E1213 06:51:57.708749 1462 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 06:51:58.085769 env[1191]: time="2024-12-13T06:51:58.085717371Z" level=info msg="CreateContainer within sandbox \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 06:51:58.096587 env[1191]: time="2024-12-13T06:51:58.096507980Z" level=info msg="CreateContainer within sandbox \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\"" Dec 13 06:51:58.097468 env[1191]: time="2024-12-13T06:51:58.097433995Z" level=info msg="StartContainer for \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\"" Dec 13 06:51:58.119606 systemd[1]: Started cri-containerd-0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36.scope. 
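Annotation: both the attempt above (fd27d85e…) and the retry that follows (0b23047c…) die before the container process ever runs. runc fails while writing the container's SELinux label to /proc/self/attr/keycreate and gets EINVAL; the dumped spec requests SELinuxOptions with Type:spc_t, and that write is only accepted when the host's SELinux state actually knows the label, so this pattern usually points at a mismatch between the container's SELinux options and the host. A small diagnostic sketch in Go, not part of any tool in this log, that checks the host side:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /sys/fs/selinux/enforce exists only when SELinux is enabled;
	// it reads "1" when enforcing and "0" when permissive.
	data, err := os.ReadFile("/sys/fs/selinux/enforce")
	if err != nil {
		fmt.Println("selinuxfs not available; SELinux appears disabled on this host:", err)
		return
	}
	switch strings.TrimSpace(string(data)) {
	case "1":
		fmt.Println("SELinux is enabled and enforcing")
	case "0":
		fmt.Println("SELinux is enabled but permissive")
	default:
		fmt.Println("unexpected value in /sys/fs/selinux/enforce")
	}
}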
Dec 13 06:51:58.138896 systemd[1]: cri-containerd-0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36.scope: Deactivated successfully. Dec 13 06:51:58.148805 env[1191]: time="2024-12-13T06:51:58.148731535Z" level=info msg="shim disconnected" id=0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36 Dec 13 06:51:58.149133 env[1191]: time="2024-12-13T06:51:58.149102271Z" level=warning msg="cleaning up after shim disconnected" id=0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36 namespace=k8s.io Dec 13 06:51:58.149278 env[1191]: time="2024-12-13T06:51:58.149250114Z" level=info msg="cleaning up dead shim" Dec 13 06:51:58.159532 env[1191]: time="2024-12-13T06:51:58.159462933Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3170 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:51:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 06:51:58.159889 env[1191]: time="2024-12-13T06:51:58.159817856Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Dec 13 06:51:58.163228 env[1191]: time="2024-12-13T06:51:58.163181356Z" level=error msg="Failed to pipe stdout of container \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\"" error="reading from a closed fifo" Dec 13 06:51:58.163743 env[1191]: time="2024-12-13T06:51:58.163374435Z" level=error msg="Failed to pipe stderr of container \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\"" error="reading from a closed fifo" Dec 13 06:51:58.164729 env[1191]: time="2024-12-13T06:51:58.164684675Z" level=error msg="StartContainer for \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 06:51:58.165185 kubelet[1462]: E1213 06:51:58.165151 1462 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36" Dec 13 06:51:58.165371 kubelet[1462]: E1213 06:51:58.165347 1462 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 06:51:58.165371 kubelet[1462]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 06:51:58.165371 kubelet[1462]: rm /hostbin/cilium-mount Dec 13 06:51:58.165371 kubelet[1462]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6spz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lj2z9_kube-system(617c98c3-4b8c-4373-85f5-f23e4c9977df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 06:51:58.165650 kubelet[1462]: E1213 06:51:58.165425 1462 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lj2z9" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" Dec 13 06:51:58.642125 kubelet[1462]: E1213 06:51:58.642041 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:58.828851 kubelet[1462]: I1213 06:51:58.828794 1462 setters.go:568] "Node became not ready" node="10.243.75.202" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T06:51:58Z","lastTransitionTime":"2024-12-13T06:51:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 06:51:59.088748 kubelet[1462]: I1213 06:51:59.088702 1462 scope.go:117] "RemoveContainer" containerID="fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2" Dec 13 06:51:59.090004 env[1191]: time="2024-12-13T06:51:59.089956828Z" level=info msg="StopPodSandbox for \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\"" Dec 13 06:51:59.093475 env[1191]: time="2024-12-13T06:51:59.090044529Z" level=info msg="Container to stop \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:51:59.093475 env[1191]: time="2024-12-13T06:51:59.090106394Z" level=info msg="Container to stop 
\"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:51:59.092537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb-shm.mount: Deactivated successfully. Dec 13 06:51:59.094762 env[1191]: time="2024-12-13T06:51:59.094712790Z" level=info msg="RemoveContainer for \"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\"" Dec 13 06:51:59.099419 env[1191]: time="2024-12-13T06:51:59.099370281Z" level=info msg="RemoveContainer for \"fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2\" returns successfully" Dec 13 06:51:59.102960 systemd[1]: cri-containerd-8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb.scope: Deactivated successfully. Dec 13 06:51:59.137533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb-rootfs.mount: Deactivated successfully. Dec 13 06:51:59.143241 env[1191]: time="2024-12-13T06:51:59.143172698Z" level=info msg="shim disconnected" id=8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb Dec 13 06:51:59.143973 env[1191]: time="2024-12-13T06:51:59.143931286Z" level=warning msg="cleaning up after shim disconnected" id=8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb namespace=k8s.io Dec 13 06:51:59.144121 env[1191]: time="2024-12-13T06:51:59.144093742Z" level=info msg="cleaning up dead shim" Dec 13 06:51:59.154052 env[1191]: time="2024-12-13T06:51:59.153983643Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:51:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3201 runtime=io.containerd.runc.v2\n" Dec 13 06:51:59.154494 env[1191]: time="2024-12-13T06:51:59.154455379Z" level=info msg="TearDown network for sandbox \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\" successfully" Dec 13 06:51:59.154584 env[1191]: time="2024-12-13T06:51:59.154494297Z" level=info msg="StopPodSandbox for \"8222669b2f6240493847a5cf1bfc64bbcd4a68adc92a45bfde783a7a04c0ecbb\" returns successfully" Dec 13 06:51:59.334023 kubelet[1462]: I1213 06:51:59.333958 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-ipsec-secrets\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334453 kubelet[1462]: I1213 06:51:59.334418 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-hostproc\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334647 kubelet[1462]: I1213 06:51:59.334610 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-clustermesh-secrets\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334766 kubelet[1462]: I1213 06:51:59.334655 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-xtables-lock\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: 
\"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334766 kubelet[1462]: I1213 06:51:59.334714 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-etc-cni-netd\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334766 kubelet[1462]: I1213 06:51:59.334759 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-run\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334924 kubelet[1462]: I1213 06:51:59.334802 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-net\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334924 kubelet[1462]: I1213 06:51:59.334835 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6spz6\" (UniqueName: \"kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-kube-api-access-6spz6\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334924 kubelet[1462]: I1213 06:51:59.334879 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-kernel\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.334924 kubelet[1462]: I1213 06:51:59.334917 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cni-path\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.335228 kubelet[1462]: I1213 06:51:59.334939 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-bpf-maps\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.335228 kubelet[1462]: I1213 06:51:59.334983 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-config-path\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.335228 kubelet[1462]: I1213 06:51:59.335047 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-hubble-tls\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.335228 kubelet[1462]: I1213 06:51:59.335106 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-lib-modules\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 
06:51:59.335228 kubelet[1462]: I1213 06:51:59.335146 1462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-cgroup\") pod \"617c98c3-4b8c-4373-85f5-f23e4c9977df\" (UID: \"617c98c3-4b8c-4373-85f5-f23e4c9977df\") " Dec 13 06:51:59.335228 kubelet[1462]: I1213 06:51:59.335203 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.335228 kubelet[1462]: I1213 06:51:59.334536 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-hostproc" (OuterVolumeSpecName: "hostproc") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.336098 kubelet[1462]: I1213 06:51:59.336041 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.336205 kubelet[1462]: I1213 06:51:59.336122 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.336205 kubelet[1462]: I1213 06:51:59.336166 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.336340 kubelet[1462]: I1213 06:51:59.336205 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.336583 kubelet[1462]: I1213 06:51:59.336435 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.336583 kubelet[1462]: I1213 06:51:59.336480 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.336583 kubelet[1462]: I1213 06:51:59.336511 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cni-path" (OuterVolumeSpecName: "cni-path") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.342021 systemd[1]: var-lib-kubelet-pods-617c98c3\x2d4b8c\x2d4373\x2d85f5\x2df23e4c9977df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 06:51:59.346808 kubelet[1462]: I1213 06:51:59.346710 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:51:59.347015 kubelet[1462]: I1213 06:51:59.346976 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:51:59.350568 systemd[1]: var-lib-kubelet-pods-617c98c3\x2d4b8c\x2d4373\x2d85f5\x2df23e4c9977df-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 06:51:59.352598 kubelet[1462]: I1213 06:51:59.352564 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:51:59.352751 kubelet[1462]: I1213 06:51:59.352660 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:51:59.356654 systemd[1]: var-lib-kubelet-pods-617c98c3\x2d4b8c\x2d4373\x2d85f5\x2df23e4c9977df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6spz6.mount: Deactivated successfully. 
Dec 13 06:51:59.358176 kubelet[1462]: I1213 06:51:59.358137 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-kube-api-access-6spz6" (OuterVolumeSpecName: "kube-api-access-6spz6") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "kube-api-access-6spz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:51:59.360269 systemd[1]: var-lib-kubelet-pods-617c98c3\x2d4b8c\x2d4373\x2d85f5\x2df23e4c9977df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 06:51:59.361358 kubelet[1462]: I1213 06:51:59.361328 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "617c98c3-4b8c-4373-85f5-f23e4c9977df" (UID: "617c98c3-4b8c-4373-85f5-f23e4c9977df"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:51:59.435683 kubelet[1462]: I1213 06:51:59.435607 1462 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-clustermesh-secrets\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.435683 kubelet[1462]: I1213 06:51:59.435673 1462 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-hostproc\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.435683 kubelet[1462]: I1213 06:51:59.435693 1462 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-xtables-lock\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435718 1462 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-net\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435732 1462 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-etc-cni-netd\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435749 1462 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-run\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435781 1462 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-host-proc-sys-kernel\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435812 1462 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6spz6\" (UniqueName: \"kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-kube-api-access-6spz6\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435837 1462 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-bpf-maps\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 
06:51:59.436121 kubelet[1462]: I1213 06:51:59.435853 1462 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cni-path\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435867 1462 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-lib-modules\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435883 1462 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-config-path\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435898 1462 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/617c98c3-4b8c-4373-85f5-f23e4c9977df-hubble-tls\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435913 1462 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-cgroup\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.436121 kubelet[1462]: I1213 06:51:59.435927 1462 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/617c98c3-4b8c-4373-85f5-f23e4c9977df-cilium-ipsec-secrets\") on node \"10.243.75.202\" DevicePath \"\"" Dec 13 06:51:59.644042 kubelet[1462]: E1213 06:51:59.642980 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:51:59.775564 systemd[1]: Removed slice kubepods-burstable-pod617c98c3_4b8c_4373_85f5_f23e4c9977df.slice. 
Dec 13 06:52:00.093378 kubelet[1462]: I1213 06:52:00.093310 1462 scope.go:117] "RemoveContainer" containerID="0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36" Dec 13 06:52:00.097285 env[1191]: time="2024-12-13T06:52:00.097204411Z" level=info msg="RemoveContainer for \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\"" Dec 13 06:52:00.103514 env[1191]: time="2024-12-13T06:52:00.103432322Z" level=info msg="RemoveContainer for \"0b23047c8e89309d4002b462f66371cac89fb7c8b56e65bd708150ff4052bc36\" returns successfully" Dec 13 06:52:00.149528 kubelet[1462]: I1213 06:52:00.149462 1462 topology_manager.go:215] "Topology Admit Handler" podUID="859eb448-729b-40ca-9414-6b5d06752756" podNamespace="kube-system" podName="cilium-r947j" Dec 13 06:52:00.149889 kubelet[1462]: E1213 06:52:00.149850 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" containerName="mount-cgroup" Dec 13 06:52:00.150076 kubelet[1462]: I1213 06:52:00.150055 1462 memory_manager.go:354] "RemoveStaleState removing state" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" containerName="mount-cgroup" Dec 13 06:52:00.150281 kubelet[1462]: E1213 06:52:00.150250 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" containerName="mount-cgroup" Dec 13 06:52:00.150455 kubelet[1462]: I1213 06:52:00.150431 1462 memory_manager.go:354] "RemoveStaleState removing state" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" containerName="mount-cgroup" Dec 13 06:52:00.157675 systemd[1]: Created slice kubepods-burstable-pod859eb448_729b_40ca_9414_6b5d06752756.slice. Dec 13 06:52:00.240944 kubelet[1462]: I1213 06:52:00.240887 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-bpf-maps\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.240944 kubelet[1462]: I1213 06:52:00.240950 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-hostproc\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.240984 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/859eb448-729b-40ca-9414-6b5d06752756-cilium-ipsec-secrets\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.241013 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-host-proc-sys-net\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.241043 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf5fp\" (UniqueName: \"kubernetes.io/projected/859eb448-729b-40ca-9414-6b5d06752756-kube-api-access-cf5fp\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 
13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.241103 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-cilium-cgroup\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.241143 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/859eb448-729b-40ca-9414-6b5d06752756-clustermesh-secrets\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.241171 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-cilium-run\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.241199 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-etc-cni-netd\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241225 kubelet[1462]: I1213 06:52:00.241227 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/859eb448-729b-40ca-9414-6b5d06752756-cilium-config-path\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241722 kubelet[1462]: I1213 06:52:00.241254 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-xtables-lock\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241722 kubelet[1462]: I1213 06:52:00.241285 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-lib-modules\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241722 kubelet[1462]: I1213 06:52:00.241312 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-cni-path\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241722 kubelet[1462]: I1213 06:52:00.241356 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/859eb448-729b-40ca-9414-6b5d06752756-host-proc-sys-kernel\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.241722 kubelet[1462]: I1213 06:52:00.241392 1462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/859eb448-729b-40ca-9414-6b5d06752756-hubble-tls\") pod \"cilium-r947j\" (UID: \"859eb448-729b-40ca-9414-6b5d06752756\") " pod="kube-system/cilium-r947j" Dec 13 06:52:00.466755 env[1191]: time="2024-12-13T06:52:00.466700782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r947j,Uid:859eb448-729b-40ca-9414-6b5d06752756,Namespace:kube-system,Attempt:0,}" Dec 13 06:52:00.491322 env[1191]: time="2024-12-13T06:52:00.487541795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:52:00.491322 env[1191]: time="2024-12-13T06:52:00.487615240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:52:00.491322 env[1191]: time="2024-12-13T06:52:00.487633994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:52:00.491322 env[1191]: time="2024-12-13T06:52:00.487887561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c pid=3230 runtime=io.containerd.runc.v2 Dec 13 06:52:00.521169 systemd[1]: Started cri-containerd-2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c.scope. Dec 13 06:52:00.556554 env[1191]: time="2024-12-13T06:52:00.556497395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r947j,Uid:859eb448-729b-40ca-9414-6b5d06752756,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\"" Dec 13 06:52:00.562535 env[1191]: time="2024-12-13T06:52:00.562485769Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:52:00.599679 env[1191]: time="2024-12-13T06:52:00.599600211Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e78cf4b54f1427d1a6d08fc11466f291f2f405ae6fbc6b2466542d571419edf5\"" Dec 13 06:52:00.600842 env[1191]: time="2024-12-13T06:52:00.600772703Z" level=info msg="StartContainer for \"e78cf4b54f1427d1a6d08fc11466f291f2f405ae6fbc6b2466542d571419edf5\"" Dec 13 06:52:00.625024 systemd[1]: Started cri-containerd-e78cf4b54f1427d1a6d08fc11466f291f2f405ae6fbc6b2466542d571419edf5.scope. 
Dec 13 06:52:00.644242 kubelet[1462]: E1213 06:52:00.644177 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:00.671469 env[1191]: time="2024-12-13T06:52:00.671403628Z" level=info msg="StartContainer for \"e78cf4b54f1427d1a6d08fc11466f291f2f405ae6fbc6b2466542d571419edf5\" returns successfully" Dec 13 06:52:00.678721 kubelet[1462]: W1213 06:52:00.675720 1462 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod617c98c3_4b8c_4373_85f5_f23e4c9977df.slice/cri-containerd-fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2.scope WatchSource:0}: container "fd27d85e920b1500965fe2c140d3f248ce1ff6a1226c3f5078e7e26ea101c6c2" in namespace "k8s.io": not found Dec 13 06:52:00.696261 systemd[1]: cri-containerd-e78cf4b54f1427d1a6d08fc11466f291f2f405ae6fbc6b2466542d571419edf5.scope: Deactivated successfully. Dec 13 06:52:00.758665 env[1191]: time="2024-12-13T06:52:00.757963829Z" level=info msg="shim disconnected" id=e78cf4b54f1427d1a6d08fc11466f291f2f405ae6fbc6b2466542d571419edf5 Dec 13 06:52:00.758665 env[1191]: time="2024-12-13T06:52:00.758024615Z" level=warning msg="cleaning up after shim disconnected" id=e78cf4b54f1427d1a6d08fc11466f291f2f405ae6fbc6b2466542d571419edf5 namespace=k8s.io Dec 13 06:52:00.758665 env[1191]: time="2024-12-13T06:52:00.758040445Z" level=info msg="cleaning up dead shim" Dec 13 06:52:00.770961 env[1191]: time="2024-12-13T06:52:00.770902212Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3315 runtime=io.containerd.runc.v2\n" Dec 13 06:52:01.100438 env[1191]: time="2024-12-13T06:52:01.100060437Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 06:52:01.137854 env[1191]: time="2024-12-13T06:52:01.137783157Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb6475a38855ade9d2e096885c83fb78d2c0623bc0de35d20483341601d198cf\"" Dec 13 06:52:01.138532 env[1191]: time="2024-12-13T06:52:01.138488657Z" level=info msg="StartContainer for \"bb6475a38855ade9d2e096885c83fb78d2c0623bc0de35d20483341601d198cf\"" Dec 13 06:52:01.170779 systemd[1]: Started cri-containerd-bb6475a38855ade9d2e096885c83fb78d2c0623bc0de35d20483341601d198cf.scope. Dec 13 06:52:01.220256 env[1191]: time="2024-12-13T06:52:01.220201741Z" level=info msg="StartContainer for \"bb6475a38855ade9d2e096885c83fb78d2c0623bc0de35d20483341601d198cf\" returns successfully" Dec 13 06:52:01.233895 systemd[1]: cri-containerd-bb6475a38855ade9d2e096885c83fb78d2c0623bc0de35d20483341601d198cf.scope: Deactivated successfully. 
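Annotation: in the replacement pod cilium-r947j the same init steps now succeed. Both mount-cgroup (e78cf4b5…) and apply-sysctl-overwrites (bb6475a3…) report "StartContainer … returns successfully", and the scope deactivations plus the "shim disconnected" / "cleaning up dead shim" entries around them are the init containers exiting after running to completion, unlike the earlier keycreate failures where StartContainer itself returned an error. A throwaway Go sketch that separates the two patterns in journal lines like these (the match strings are taken from the wording in this log, so they are an assumption about the formatting, nothing more):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Container IDs in these entries are 64 hex characters wrapped in \"…\".
	ok := regexp.MustCompile(`StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully`)
	failed := regexp.MustCompile(`msg="StartContainer for \\?"([0-9a-f]{64})\\?" failed"`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal entries here are long
	for sc.Scan() {
		line := sc.Text()
		if m := failed.FindStringSubmatch(line); m != nil {
			fmt.Println("failed start:     ", m[1][:12])
		} else if m := ok.FindStringSubmatch(line); m != nil {
			fmt.Println("successful start: ", m[1][:12])
		}
	}
}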
Dec 13 06:52:01.339668 env[1191]: time="2024-12-13T06:52:01.339583880Z" level=info msg="shim disconnected" id=bb6475a38855ade9d2e096885c83fb78d2c0623bc0de35d20483341601d198cf Dec 13 06:52:01.340035 env[1191]: time="2024-12-13T06:52:01.340006561Z" level=warning msg="cleaning up after shim disconnected" id=bb6475a38855ade9d2e096885c83fb78d2c0623bc0de35d20483341601d198cf namespace=k8s.io Dec 13 06:52:01.340212 env[1191]: time="2024-12-13T06:52:01.340182933Z" level=info msg="cleaning up dead shim" Dec 13 06:52:01.366488 env[1191]: time="2024-12-13T06:52:01.366050094Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3378 runtime=io.containerd.runc.v2\n" Dec 13 06:52:01.645333 kubelet[1462]: E1213 06:52:01.644745 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:01.762771 env[1191]: time="2024-12-13T06:52:01.762687746Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:52:01.764351 env[1191]: time="2024-12-13T06:52:01.764308896Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:52:01.767393 env[1191]: time="2024-12-13T06:52:01.766986451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:52:01.767393 env[1191]: time="2024-12-13T06:52:01.767375941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 06:52:01.770528 kubelet[1462]: I1213 06:52:01.770483 1462 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="617c98c3-4b8c-4373-85f5-f23e4c9977df" path="/var/lib/kubelet/pods/617c98c3-4b8c-4373-85f5-f23e4c9977df/volumes" Dec 13 06:52:01.771986 env[1191]: time="2024-12-13T06:52:01.771939534Z" level=info msg="CreateContainer within sandbox \"7d1fe152c039d0f7dbe92aaa4d2f8ef2fee1632e895dd67bffb90cff81ef319b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 06:52:01.785804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504664657.mount: Deactivated successfully. Dec 13 06:52:01.796446 env[1191]: time="2024-12-13T06:52:01.796385289Z" level=info msg="CreateContainer within sandbox \"7d1fe152c039d0f7dbe92aaa4d2f8ef2fee1632e895dd67bffb90cff81ef319b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eaac1c926c109cc8b21dd605089140d5489504fb5a6614268a81509d1b10d868\"" Dec 13 06:52:01.797421 env[1191]: time="2024-12-13T06:52:01.797386568Z" level=info msg="StartContainer for \"eaac1c926c109cc8b21dd605089140d5489504fb5a6614268a81509d1b10d868\"" Dec 13 06:52:01.820399 systemd[1]: Started cri-containerd-eaac1c926c109cc8b21dd605089140d5489504fb5a6614268a81509d1b10d868.scope. 
Dec 13 06:52:01.864833 env[1191]: time="2024-12-13T06:52:01.864775110Z" level=info msg="StartContainer for \"eaac1c926c109cc8b21dd605089140d5489504fb5a6614268a81509d1b10d868\" returns successfully" Dec 13 06:52:02.113083 env[1191]: time="2024-12-13T06:52:02.113008024Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 06:52:02.119914 kubelet[1462]: I1213 06:52:02.119880 1462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-dbtpx" podStartSLOduration=1.865962627 podStartE2EDuration="6.119805838s" podCreationTimestamp="2024-12-13 06:51:56 +0000 UTC" firstStartedPulling="2024-12-13 06:51:57.5144308 +0000 UTC m=+80.229997095" lastFinishedPulling="2024-12-13 06:52:01.768274008 +0000 UTC m=+84.483840306" observedRunningTime="2024-12-13 06:52:02.1193936 +0000 UTC m=+84.834959902" watchObservedRunningTime="2024-12-13 06:52:02.119805838 +0000 UTC m=+84.835372144" Dec 13 06:52:02.131256 env[1191]: time="2024-12-13T06:52:02.131153899Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30ae3e0922b3f29a030153856f949f35192736c4fe15138a42d3de3a17049abc\"" Dec 13 06:52:02.131905 env[1191]: time="2024-12-13T06:52:02.131872180Z" level=info msg="StartContainer for \"30ae3e0922b3f29a030153856f949f35192736c4fe15138a42d3de3a17049abc\"" Dec 13 06:52:02.157232 systemd[1]: Started cri-containerd-30ae3e0922b3f29a030153856f949f35192736c4fe15138a42d3de3a17049abc.scope. Dec 13 06:52:02.213520 env[1191]: time="2024-12-13T06:52:02.212635041Z" level=info msg="StartContainer for \"30ae3e0922b3f29a030153856f949f35192736c4fe15138a42d3de3a17049abc\" returns successfully" Dec 13 06:52:02.220484 systemd[1]: cri-containerd-30ae3e0922b3f29a030153856f949f35192736c4fe15138a42d3de3a17049abc.scope: Deactivated successfully. 
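Annotation: the pod_startup_latency_tracker entry above for cilium-operator-5cc964979-dbtpx is internally consistent. The E2E duration is observedRunningTime minus podCreationTimestamp (~6.12s), and the SLO duration is, to within the second-level truncation of the creation timestamp, that E2E value minus the image-pull window lastFinishedPulling - firstStartedPulling (~4.25s), leaving ~1.87s. A short Go sketch reproducing the arithmetic from the logged timestamps; this is a reading of the logged fields, not kubelet's own computation:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted implicitly when parsing
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-12-13 06:51:56 +0000 UTC")
	firstPull := parse("2024-12-13 06:51:57.5144308 +0000 UTC")
	lastPull := parse("2024-12-13 06:52:01.768274008 +0000 UTC")
	running := parse("2024-12-13 06:52:02.1193936 +0000 UTC")

	e2e := running.Sub(created)          // ~6.119s, cf. podStartE2EDuration="6.119805838s"
	slo := e2e - lastPull.Sub(firstPull) // ~1.866s, cf. podStartSLOduration=1.865962627
	fmt.Println("E2E:", e2e, "SLO:", slo)
}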
Dec 13 06:52:02.248371 env[1191]: time="2024-12-13T06:52:02.248300571Z" level=info msg="shim disconnected" id=30ae3e0922b3f29a030153856f949f35192736c4fe15138a42d3de3a17049abc Dec 13 06:52:02.248371 env[1191]: time="2024-12-13T06:52:02.248369358Z" level=warning msg="cleaning up after shim disconnected" id=30ae3e0922b3f29a030153856f949f35192736c4fe15138a42d3de3a17049abc namespace=k8s.io Dec 13 06:52:02.248671 env[1191]: time="2024-12-13T06:52:02.248386439Z" level=info msg="cleaning up dead shim" Dec 13 06:52:02.263035 env[1191]: time="2024-12-13T06:52:02.262972326Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3477 runtime=io.containerd.runc.v2\n" Dec 13 06:52:02.645379 kubelet[1462]: E1213 06:52:02.645296 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:02.709974 kubelet[1462]: E1213 06:52:02.709925 1462 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 06:52:03.118845 env[1191]: time="2024-12-13T06:52:03.118791423Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 06:52:03.134274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206550141.mount: Deactivated successfully. Dec 13 06:52:03.143575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618546061.mount: Deactivated successfully. Dec 13 06:52:03.150924 env[1191]: time="2024-12-13T06:52:03.150862299Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bccea48b9a60705fea7cbd084fed1fdd98b19f2fe8607c0c5d41107fcb0d2d50\"" Dec 13 06:52:03.152139 env[1191]: time="2024-12-13T06:52:03.152058390Z" level=info msg="StartContainer for \"bccea48b9a60705fea7cbd084fed1fdd98b19f2fe8607c0c5d41107fcb0d2d50\"" Dec 13 06:52:03.174212 systemd[1]: Started cri-containerd-bccea48b9a60705fea7cbd084fed1fdd98b19f2fe8607c0c5d41107fcb0d2d50.scope. Dec 13 06:52:03.217918 systemd[1]: cri-containerd-bccea48b9a60705fea7cbd084fed1fdd98b19f2fe8607c0c5d41107fcb0d2d50.scope: Deactivated successfully. 
Dec 13 06:52:03.221952 env[1191]: time="2024-12-13T06:52:03.221855291Z" level=info msg="StartContainer for \"bccea48b9a60705fea7cbd084fed1fdd98b19f2fe8607c0c5d41107fcb0d2d50\" returns successfully" Dec 13 06:52:03.246854 env[1191]: time="2024-12-13T06:52:03.246754168Z" level=info msg="shim disconnected" id=bccea48b9a60705fea7cbd084fed1fdd98b19f2fe8607c0c5d41107fcb0d2d50 Dec 13 06:52:03.246854 env[1191]: time="2024-12-13T06:52:03.246835947Z" level=warning msg="cleaning up after shim disconnected" id=bccea48b9a60705fea7cbd084fed1fdd98b19f2fe8607c0c5d41107fcb0d2d50 namespace=k8s.io Dec 13 06:52:03.246854 env[1191]: time="2024-12-13T06:52:03.246851484Z" level=info msg="cleaning up dead shim" Dec 13 06:52:03.257566 env[1191]: time="2024-12-13T06:52:03.257507873Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:52:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3533 runtime=io.containerd.runc.v2\n" Dec 13 06:52:03.647335 kubelet[1462]: E1213 06:52:03.647233 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:04.124988 env[1191]: time="2024-12-13T06:52:04.124905801Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 06:52:04.148446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075962236.mount: Deactivated successfully. Dec 13 06:52:04.156663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641764472.mount: Deactivated successfully. Dec 13 06:52:04.158902 env[1191]: time="2024-12-13T06:52:04.158798862Z" level=info msg="CreateContainer within sandbox \"2ed35f71fae8a92460783191b01a3a5d0dd5262c0afbdc4f7a9c6fd5cc93c52c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60\"" Dec 13 06:52:04.159682 env[1191]: time="2024-12-13T06:52:04.159629066Z" level=info msg="StartContainer for \"d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60\"" Dec 13 06:52:04.183180 systemd[1]: Started cri-containerd-d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60.scope. Dec 13 06:52:04.234271 env[1191]: time="2024-12-13T06:52:04.234211008Z" level=info msg="StartContainer for \"d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60\" returns successfully" Dec 13 06:52:04.648975 kubelet[1462]: E1213 06:52:04.648923 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:04.920160 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 06:52:05.649793 kubelet[1462]: E1213 06:52:05.649688 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:06.650694 kubelet[1462]: E1213 06:52:06.650642 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:06.814468 systemd[1]: run-containerd-runc-k8s.io-d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60-runc.6EKCEt.mount: Deactivated successfully. 
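Annotation: the kernel line just above, "alg: No test for seqiv(rfc4106(gcm(aes)))", shows AES-GCM ESP transforms being instantiated once cilium-agent starts, which lines up with the cilium-ipsec-secrets volume mounted into this pod; the "No test" wording only means the crypto self-test manager has no test registered for that template, it is not an error. A hedged Go sketch that looks for the instantiated algorithm on the host, offered as a diagnostic illustration rather than anything run on this node:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/crypto lists the algorithms the kernel has instantiated; once IPsec
	// SAs using AES-GCM exist, an rfc4106(gcm(aes)) entry should be present.
	f, err := os.Open("/proc/crypto")
	if err != nil {
		fmt.Println("cannot read /proc/crypto:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "name") && strings.Contains(line, "rfc4106(gcm(aes))") {
			fmt.Println("found:", strings.TrimSpace(line))
		}
	}
}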
Dec 13 06:52:07.652207 kubelet[1462]: E1213 06:52:07.652154 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:08.276512 systemd-networkd[1024]: lxc_health: Link UP Dec 13 06:52:08.300800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 06:52:08.296012 systemd-networkd[1024]: lxc_health: Gained carrier Dec 13 06:52:08.497266 kubelet[1462]: I1213 06:52:08.497102 1462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r947j" podStartSLOduration=8.496960562 podStartE2EDuration="8.496960562s" podCreationTimestamp="2024-12-13 06:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:52:05.156206364 +0000 UTC m=+87.871772671" watchObservedRunningTime="2024-12-13 06:52:08.496960562 +0000 UTC m=+91.212526862" Dec 13 06:52:08.653428 kubelet[1462]: E1213 06:52:08.653265 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:09.126720 systemd[1]: run-containerd-runc-k8s.io-d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60-runc.mNSlpr.mount: Deactivated successfully. Dec 13 06:52:09.341448 systemd-networkd[1024]: lxc_health: Gained IPv6LL Dec 13 06:52:09.654369 kubelet[1462]: E1213 06:52:09.654307 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:10.655644 kubelet[1462]: E1213 06:52:10.655577 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:11.384308 systemd[1]: run-containerd-runc-k8s.io-d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60-runc.EcVfLW.mount: Deactivated successfully. Dec 13 06:52:11.657138 kubelet[1462]: E1213 06:52:11.656538 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:12.656832 kubelet[1462]: E1213 06:52:12.656768 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:13.592324 systemd[1]: run-containerd-runc-k8s.io-d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60-runc.Us3Crb.mount: Deactivated successfully. Dec 13 06:52:13.657689 kubelet[1462]: E1213 06:52:13.657593 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:14.658651 kubelet[1462]: E1213 06:52:14.658591 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:15.659809 kubelet[1462]: E1213 06:52:15.659729 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:15.817038 systemd[1]: run-containerd-runc-k8s.io-d81456df350115f4bfb96932caf8a0950a4d49ea3ae9e5d03c8e10daedd2cd60-runc.UfblS2.mount: Deactivated successfully. 
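Annotation: the systemd-networkd entries above for lxc_health gaining carrier and an IPv6 link-local address are Cilium's health-endpoint veth coming up, a good sign the datapath is in place for the cilium-r947j pod reported as started shortly afterwards (its podStartSLOduration equals the E2E duration because no image pull was needed). A trivial Go check for the interface, a hypothetical diagnostic and not something run on this node:

package main

import (
	"fmt"
	"net"
)

func main() {
	// lxc_health is the veth Cilium creates for its health-check endpoint.
	ifi, err := net.InterfaceByName("lxc_health")
	if err != nil {
		fmt.Println("lxc_health not present:", err)
		return
	}
	fmt.Printf("lxc_health: index=%d flags=%v up=%v\n",
		ifi.Index, ifi.Flags, ifi.Flags&net.FlagUp != 0)
}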
Dec 13 06:52:16.660042 kubelet[1462]: E1213 06:52:16.659927 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:17.568874 kubelet[1462]: E1213 06:52:17.568777 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:17.660532 kubelet[1462]: E1213 06:52:17.660394 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:18.662167 kubelet[1462]: E1213 06:52:18.662084 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:19.664452 kubelet[1462]: E1213 06:52:19.664342 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:52:20.666230 kubelet[1462]: E1213 06:52:20.666120 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"