Dec 13 06:43:47.929408 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 06:43:47.930532 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:43:47.930552 kernel: BIOS-provided physical RAM map:
Dec 13 06:43:47.930563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 06:43:47.930573 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 06:43:47.930583 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 06:43:47.930594 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 06:43:47.930605 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 06:43:47.930615 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 06:43:47.930625 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 06:43:47.930638 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 06:43:47.930648 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 06:43:47.930659 kernel: NX (Execute Disable) protection: active
Dec 13 06:43:47.930669 kernel: SMBIOS 2.8 present.
Dec 13 06:43:47.930681 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 06:43:47.930693 kernel: Hypervisor detected: KVM
Dec 13 06:43:47.930707 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 06:43:47.930718 kernel: kvm-clock: cpu 0, msr 4e19b001, primary cpu clock
Dec 13 06:43:47.930729 kernel: kvm-clock: using sched offset of 4746735786 cycles
Dec 13 06:43:47.930740 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 06:43:47.930752 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 06:43:47.930763 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 06:43:47.930774 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 06:43:47.930785 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 06:43:47.930796 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 06:43:47.930810 kernel: Using GB pages for direct mapping
Dec 13 06:43:47.930821 kernel: ACPI: Early table checksum verification disabled
Dec 13 06:43:47.930832 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 06:43:47.930843 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:43:47.930854 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:43:47.930865 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:43:47.930876 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 06:43:47.930887 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:43:47.930898 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:43:47.930912 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:43:47.930923 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 06:43:47.930933 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 06:43:47.930944 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 06:43:47.930955 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 06:43:47.930966 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 06:43:47.930984 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 06:43:47.930998 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 06:43:47.931010 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 06:43:47.931021 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 06:43:47.931033 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 06:43:47.931044 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 06:43:47.931056 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 06:43:47.931067 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 06:43:47.931082 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 06:43:47.931093 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 06:43:47.931104 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 06:43:47.931116 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 06:43:47.931127 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 06:43:47.931139 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 06:43:47.931150 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 06:43:47.931161 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 06:43:47.931173 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 06:43:47.931184 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 06:43:47.931198 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 06:43:47.931222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 06:43:47.931233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 06:43:47.931244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 06:43:47.931260 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 06:43:47.931284 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 06:43:47.931295 kernel: Zone ranges:
Dec 13 06:43:47.931307 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 06:43:47.931318 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 06:43:47.931333 kernel: Normal empty
Dec 13 06:43:47.931345 kernel: Movable zone start for each node
Dec 13 06:43:47.931356 kernel: Early memory node ranges
Dec 13 06:43:47.931368 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 06:43:47.931380 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 06:43:47.931391 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 06:43:47.931403 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 06:43:47.931447 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 06:43:47.931462 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 06:43:47.931478 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 06:43:47.931490 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 06:43:47.931501 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 06:43:47.931513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 06:43:47.931524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 06:43:47.931536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 06:43:47.931547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 06:43:47.931559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 06:43:47.931570 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 06:43:47.931585 kernel: TSC deadline timer available
Dec 13 06:43:47.931597 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 06:43:47.931608 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 06:43:47.931620 kernel: Booting paravirtualized kernel on KVM
Dec 13 06:43:47.931631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 06:43:47.931643 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 06:43:47.931655 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 06:43:47.931667 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 06:43:47.931678 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 06:43:47.931692 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Dec 13 06:43:47.931704 kernel: kvm-guest: PV spinlocks enabled
Dec 13 06:43:47.931728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 06:43:47.931739 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 06:43:47.931750 kernel: Policy zone: DMA32
Dec 13 06:43:47.931762 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 06:43:47.931775 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 06:43:47.931798 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 06:43:47.931813 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 06:43:47.931824 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 06:43:47.931836 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 192524K reserved, 0K cma-reserved)
Dec 13 06:43:47.931848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 06:43:47.931860 kernel: Kernel/User page tables isolation: enabled
Dec 13 06:43:47.931872 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 06:43:47.931883 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 06:43:47.931895 kernel: rcu: Hierarchical RCU implementation.
Dec 13 06:43:47.931907 kernel: rcu: RCU event tracing is enabled.
Dec 13 06:43:47.931922 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 06:43:47.931934 kernel: Rude variant of Tasks RCU enabled.
Dec 13 06:43:47.931945 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 06:43:47.931957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 06:43:47.931969 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 06:43:47.931980 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 06:43:47.931992 kernel: random: crng init done
Dec 13 06:43:47.932013 kernel: Console: colour VGA+ 80x25
Dec 13 06:43:47.932026 kernel: printk: console [tty0] enabled
Dec 13 06:43:47.932038 kernel: printk: console [ttyS0] enabled
Dec 13 06:43:47.932050 kernel: ACPI: Core revision 20210730
Dec 13 06:43:47.932062 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 06:43:47.932077 kernel: x2apic enabled
Dec 13 06:43:47.932089 kernel: Switched APIC routing to physical x2apic.
Dec 13 06:43:47.932101 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 06:43:47.932113 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 06:43:47.932131 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 06:43:47.932146 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 06:43:47.932158 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 06:43:47.932181 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 06:43:47.932195 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 06:43:47.932207 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 06:43:47.932219 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 06:43:47.932231 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 06:43:47.932249 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 06:43:47.932261 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 06:43:47.932273 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 06:43:47.932285 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 06:43:47.932300 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 06:43:47.932313 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 06:43:47.932325 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 06:43:47.932337 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 06:43:47.932349 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 06:43:47.932361 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 06:43:47.932373 kernel: Freeing SMP alternatives memory: 32K
Dec 13 06:43:47.932385 kernel: pid_max: default: 32768 minimum: 301
Dec 13 06:43:47.932409 kernel: LSM: Security Framework initializing
Dec 13 06:43:47.932421 kernel: SELinux: Initializing.
Dec 13 06:43:47.932433 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:43:47.932458 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 06:43:47.932471 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 06:43:47.932483 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 06:43:47.932495 kernel: signal: max sigframe size: 1776
Dec 13 06:43:47.932507 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 06:43:47.932520 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 06:43:47.932532 kernel: smp: Bringing up secondary CPUs ...
Dec 13 06:43:47.932544 kernel: x86: Booting SMP configuration:
Dec 13 06:43:47.932556 kernel: .... node #0, CPUs: #1
Dec 13 06:43:47.932572 kernel: kvm-clock: cpu 1, msr 4e19b041, secondary cpu clock
Dec 13 06:43:47.932584 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 06:43:47.932596 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Dec 13 06:43:47.932608 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 06:43:47.932620 kernel: smpboot: Max logical packages: 16
Dec 13 06:43:47.932633 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 06:43:47.932645 kernel: devtmpfs: initialized
Dec 13 06:43:47.932657 kernel: x86/mm: Memory block size: 128MB
Dec 13 06:43:47.932669 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 06:43:47.932681 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 06:43:47.932697 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 06:43:47.932709 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 06:43:47.932721 kernel: audit: initializing netlink subsys (disabled)
Dec 13 06:43:47.932734 kernel: audit: type=2000 audit(1734072227.109:1): state=initialized audit_enabled=0 res=1
Dec 13 06:43:47.932746 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 06:43:47.932758 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 06:43:47.932770 kernel: cpuidle: using governor menu
Dec 13 06:43:47.932782 kernel: ACPI: bus type PCI registered
Dec 13 06:43:47.932794 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 06:43:47.932810 kernel: dca service started, version 1.12.1
Dec 13 06:43:47.932822 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 06:43:47.932834 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 06:43:47.932846 kernel: PCI: Using configuration type 1 for base access
Dec 13 06:43:47.932859 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 06:43:47.932871 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 06:43:47.932883 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 06:43:47.932895 kernel: ACPI: Added _OSI(Module Device)
Dec 13 06:43:47.932911 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 06:43:47.932923 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 06:43:47.932935 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 06:43:47.932948 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 06:43:47.932960 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 06:43:47.932972 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 06:43:47.932993 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 06:43:47.933005 kernel: ACPI: Interpreter enabled
Dec 13 06:43:47.933017 kernel: ACPI: PM: (supports S0 S5)
Dec 13 06:43:47.933029 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 06:43:47.933045 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 06:43:47.933057 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 06:43:47.933069 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 06:43:47.933343 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 06:43:47.933540 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 06:43:47.933696 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 06:43:47.933714 kernel: PCI host bridge to bus 0000:00
Dec 13 06:43:47.933882 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 06:43:47.934025 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 06:43:47.934166 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 06:43:47.934308 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 06:43:47.934478 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 06:43:47.934620 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:43:47.934771 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 06:43:47.934969 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 06:43:47.935177 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 06:43:47.935333 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 06:43:47.940608 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 06:43:47.940775 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 06:43:47.940935 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 06:43:47.941110 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.941277 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 06:43:47.941478 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.941640 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 06:43:47.941803 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.941959 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 06:43:47.942127 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.942284 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 06:43:47.942482 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.942639 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 06:43:47.942805 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.942959 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 06:43:47.943126 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.943278 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 06:43:47.943499 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 06:43:47.943657 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 06:43:47.943830 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 06:43:47.943985 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 06:43:47.944137 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 06:43:47.944297 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 06:43:47.944483 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 06:43:47.944648 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 06:43:47.944803 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 06:43:47.944958 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 06:43:47.945112 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 06:43:47.945296 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 06:43:47.952518 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 06:43:47.952695 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 06:43:47.952854 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 06:43:47.953009 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 06:43:47.953175 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 06:43:47.953328 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 06:43:47.953538 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 06:43:47.953702 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 06:43:47.953860 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:43:47.954013 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:43:47.954166 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:43:47.954334 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 06:43:47.954569 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 06:43:47.954745 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 06:43:47.954907 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:43:47.955066 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:43:47.955234 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 06:43:47.955406 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 06:43:47.955575 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:43:47.955736 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:43:47.955888 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:43:47.956065 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 06:43:47.956235 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 06:43:47.956400 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:43:47.956598 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:43:47.956773 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:43:47.956955 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:43:47.957136 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:43:47.957323 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:43:47.957506 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:43:47.957662 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:43:47.957815 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:43:47.957985 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:43:47.958140 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:43:47.958292 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:43:47.958479 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:43:47.958634 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:43:47.958786 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:43:47.958939 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:43:47.959102 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:43:47.959257 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:43:47.959275 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 06:43:47.959288 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 06:43:47.959307 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 06:43:47.959319 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 06:43:47.959332 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 06:43:47.959344 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 06:43:47.959356 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 06:43:47.959369 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 06:43:47.959381 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 06:43:47.967453 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 06:43:47.967472 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 06:43:47.967491 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 06:43:47.967504 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 06:43:47.967516 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 06:43:47.967529 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 06:43:47.967541 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 06:43:47.967553 kernel: iommu: Default domain type: Translated
Dec 13 06:43:47.967565 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 06:43:47.967736 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 06:43:47.967897 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 06:43:47.968069 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 06:43:47.968088 kernel: vgaarb: loaded
Dec 13 06:43:47.968100 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 06:43:47.968123 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 06:43:47.968136 kernel: PTP clock support registered
Dec 13 06:43:47.968148 kernel: PCI: Using ACPI for IRQ routing
Dec 13 06:43:47.968161 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 06:43:47.968178 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 06:43:47.968196 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 06:43:47.968208 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 06:43:47.968221 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 06:43:47.968239 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 06:43:47.968251 kernel: pnp: PnP ACPI init
Dec 13 06:43:47.968458 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 06:43:47.968479 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 06:43:47.968492 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 06:43:47.968510 kernel: NET: Registered PF_INET protocol family
Dec 13 06:43:47.968523 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 06:43:47.968536 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 06:43:47.968548 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 06:43:47.968561 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 06:43:47.968573 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 06:43:47.968585 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 06:43:47.968598 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:43:47.968610 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 06:43:47.968627 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 06:43:47.968639 kernel: NET: Registered PF_XDP protocol family
Dec 13 06:43:47.968795 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 06:43:47.968951 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 06:43:47.969106 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 06:43:47.969279 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 06:43:47.969462 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 06:43:47.969658 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 06:43:47.969814 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 06:43:47.969967 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 06:43:47.970122 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 06:43:47.970273 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 06:43:47.970461 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 06:43:47.970622 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 06:43:47.970776 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 06:43:47.970928 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 06:43:47.971080 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 06:43:47.971231 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 06:43:47.971416 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 06:43:47.971594 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 06:43:47.971749 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 06:43:47.971903 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 06:43:47.972065 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 06:43:47.972227 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:43:47.972428 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 06:43:47.972604 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 06:43:47.972757 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 06:43:47.972909 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:43:47.973063 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 06:43:47.973221 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 06:43:47.973381 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 06:43:47.973567 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:43:47.973723 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 06:43:47.973878 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 06:43:47.974037 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 06:43:47.974199 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:43:47.974352 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 06:43:47.983276 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 06:43:47.983479 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 06:43:47.983639 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:43:47.983793 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 06:43:47.983948 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 06:43:47.984102 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 06:43:47.984255 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:43:47.984422 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 06:43:47.984600 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 06:43:47.984754 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 06:43:47.984906 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:43:47.985059 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 06:43:47.985212 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 06:43:47.985371 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 06:43:47.985551 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:43:47.985697 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 06:43:47.985838 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 06:43:47.985978 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 06:43:47.986117 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 06:43:47.986257 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 06:43:47.986457 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 06:43:47.986640 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 06:43:47.986793 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 06:43:47.986960 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 06:43:47.987141 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 06:43:47.987312 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 06:43:47.987500 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 06:43:47.987649 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 06:43:47.987836 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 06:43:47.987993 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 06:43:47.988140 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 06:43:47.988299 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 06:43:47.988478 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 06:43:47.988628 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 06:43:47.988800 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 06:43:47.988967 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 06:43:47.989126 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 06:43:47.989284 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 06:43:47.990551 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 06:43:47.990706 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 06:43:47.990884 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 06:43:47.991039 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 06:43:47.991186 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 06:43:47.991341 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 06:43:47.991524 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 06:43:47.991674 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 06:43:47.991694 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 06:43:47.991707 kernel: PCI: CLS 0 bytes,
default 64 Dec 13 06:43:47.991721 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 06:43:47.991740 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 06:43:47.991754 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 06:43:47.991768 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 06:43:47.991781 kernel: Initialise system trusted keyrings Dec 13 06:43:47.991794 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 06:43:47.991807 kernel: Key type asymmetric registered Dec 13 06:43:47.991820 kernel: Asymmetric key parser 'x509' registered Dec 13 06:43:47.991832 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 06:43:47.991845 kernel: io scheduler mq-deadline registered Dec 13 06:43:47.991862 kernel: io scheduler kyber registered Dec 13 06:43:47.991875 kernel: io scheduler bfq registered Dec 13 06:43:47.992029 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 06:43:47.992185 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 06:43:47.992340 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:43:47.992522 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 06:43:47.992678 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 06:43:47.992839 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:43:47.992993 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 06:43:47.993157 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 06:43:47.993310 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 06:43:47.993492 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 06:43:47.993648 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 06:43:47.993810 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:43:47.993966 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 06:43:47.994122 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 06:43:47.994276 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:43:47.994468 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 06:43:47.994625 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 06:43:47.994786 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:43:47.994942 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 06:43:47.995097 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 06:43:47.995252 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:43:47.995426 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 06:43:47.995593 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 06:43:47.995753 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 06:43:47.995776 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 06:43:47.995791 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 06:43:47.995804 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 06:43:47.995818 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 06:43:47.995831 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 06:43:47.995844 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 06:43:47.995857 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 06:43:47.995876 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 06:43:47.996060 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 06:43:47.996081 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 06:43:47.996227 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 06:43:47.996372 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T06:43:47 UTC (1734072227) Dec 13 06:43:47.996546 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 06:43:47.996566 kernel: intel_pstate: CPU model not supported Dec 13 06:43:47.996585 kernel: NET: Registered PF_INET6 protocol family Dec 13 06:43:47.996599 kernel: Segment Routing with IPv6 Dec 13 06:43:47.996612 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 06:43:47.996625 kernel: NET: Registered PF_PACKET protocol family Dec 13 06:43:47.996638 kernel: Key type dns_resolver registered Dec 13 06:43:47.996651 kernel: IPI shorthand broadcast: enabled Dec 13 06:43:47.996664 kernel: sched_clock: Marking stable (985028432, 225620900)->(1503317625, -292668293) Dec 13 06:43:47.996677 kernel: registered taskstats version 1 Dec 13 06:43:47.996690 kernel: Loading compiled-in X.509 certificates Dec 13 06:43:47.996703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 06:43:47.996720 kernel: Key type .fscrypt registered Dec 13 06:43:47.996733 kernel: Key type fscrypt-provisioning registered Dec 13 06:43:47.996746 kernel: ima: No TPM chip found, activating TPM-bypass! 
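[Editor's note: the rtc_cmos line above records both the wall-clock time it set and the corresponding Unix epoch in parentheses; the two are alternate renderings of the same instant. A quick sanity check, with the epoch value copied from the log:]

```python
from datetime import datetime, timezone

# Epoch value reported by rtc_cmos above:
# "setting system clock to 2024-12-13T06:43:47 UTC (1734072227)"
epoch = 1734072227
wall = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(wall.isoformat())  # 2024-12-13T06:43:47+00:00
```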
Dec 13 06:43:47.996759 kernel: ima: Allocated hash algorithm: sha1 Dec 13 06:43:47.996772 kernel: ima: No architecture policies found Dec 13 06:43:47.996786 kernel: clk: Disabling unused clocks Dec 13 06:43:47.996798 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 06:43:47.996811 kernel: Write protecting the kernel read-only data: 28672k Dec 13 06:43:47.996828 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 06:43:47.996842 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 06:43:47.996854 kernel: Run /init as init process Dec 13 06:43:47.996868 kernel: with arguments: Dec 13 06:43:47.996880 kernel: /init Dec 13 06:43:47.996893 kernel: with environment: Dec 13 06:43:47.996905 kernel: HOME=/ Dec 13 06:43:47.996918 kernel: TERM=linux Dec 13 06:43:47.996933 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 06:43:47.996956 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:43:47.996978 systemd[1]: Detected virtualization kvm. Dec 13 06:43:47.996996 systemd[1]: Detected architecture x86-64. Dec 13 06:43:47.997010 systemd[1]: Running in initrd. Dec 13 06:43:47.997023 systemd[1]: No hostname configured, using default hostname. Dec 13 06:43:47.997037 systemd[1]: Hostname set to <localhost>. Dec 13 06:43:47.997059 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:43:47.997076 systemd[1]: Queued start job for default target initrd.target. Dec 13 06:43:47.997090 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:43:47.997103 systemd[1]: Reached target cryptsetup.target. Dec 13 06:43:47.997117 systemd[1]: Reached target paths.target. Dec 13 06:43:47.997130 systemd[1]: Reached target slices.target. 
Dec 13 06:43:47.997155 systemd[1]: Reached target swap.target. Dec 13 06:43:47.997169 systemd[1]: Reached target timers.target. Dec 13 06:43:47.997183 systemd[1]: Listening on iscsid.socket. Dec 13 06:43:47.997200 systemd[1]: Listening on iscsiuio.socket. Dec 13 06:43:47.997214 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 06:43:47.997231 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 06:43:47.997245 systemd[1]: Listening on systemd-journald.socket. Dec 13 06:43:47.997259 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:43:47.997273 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:43:47.997286 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:43:47.997300 systemd[1]: Reached target sockets.target. Dec 13 06:43:47.997314 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:43:47.997331 systemd[1]: Finished network-cleanup.service. Dec 13 06:43:47.997345 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 06:43:47.997359 systemd[1]: Starting systemd-journald.service... Dec 13 06:43:47.997372 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:43:48.008478 systemd[1]: Starting systemd-resolved.service... Dec 13 06:43:48.008509 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 06:43:48.008524 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:43:48.008539 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 06:43:48.008553 kernel: Bridge firewalling registered Dec 13 06:43:48.008585 systemd-journald[201]: Journal started Dec 13 06:43:48.008663 systemd-journald[201]: Runtime Journal (/run/log/journal/00370428ee944804ae4377d98cdb75a8) is 4.7M, max 38.1M, 33.3M free. 
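[Editor's note: the TCP/UDP hash-table lines earlier in this log pair each byte count with an allocation "order"; the order is the power-of-two number of base pages backing the table. A small sketch, with sizes taken from the log and a 4 KiB page size assumed:]

```python
PAGE_SIZE = 4096  # x86-64 base page size assumed

def alloc_order(nbytes: int) -> int:
    """Smallest order N such that 2**N pages of PAGE_SIZE cover nbytes."""
    order = 0
    while PAGE_SIZE << order < nbytes:
        order += 1
    return order

# Values from the log: TCP established 131072 B -> order 5,
# TCP bind 262144 B -> order 6, UDP 32768 B -> order 3
for nbytes, expected in [(131072, 5), (262144, 6), (32768, 3)]:
    assert alloc_order(nbytes) == expected
```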
Dec 13 06:43:47.926940 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 06:43:47.981348 systemd-resolved[203]: Positive Trust Anchors: Dec 13 06:43:48.040463 systemd[1]: Started systemd-resolved.service. Dec 13 06:43:48.040503 kernel: audit: type=1130 audit(1734072228.025:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.040532 kernel: audit: type=1130 audit(1734072228.032:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.040551 systemd[1]: Started systemd-journald.service. Dec 13 06:43:48.040569 kernel: SCSI subsystem initialized Dec 13 06:43:48.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:47.981370 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 06:43:48.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:43:47.981449 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 06:43:48.060278 kernel: audit: type=1130 audit(1734072228.042:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.060317 kernel: audit: type=1130 audit(1734072228.047:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.060336 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 06:43:48.060354 kernel: device-mapper: uevent: version 1.0.3 Dec 13 06:43:48.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:47.985348 systemd-resolved[203]: Defaulting to hostname 'linux'. Dec 13 06:43:48.070072 kernel: audit: type=1130 audit(1734072228.048:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:43:48.070099 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 06:43:48.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.009018 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 06:43:48.042878 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 06:43:48.048350 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 06:43:48.056723 systemd[1]: Reached target nss-lookup.target. Dec 13 06:43:48.071748 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 06:43:48.074361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 06:43:48.074519 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 06:43:48.079698 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:43:48.082039 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:43:48.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.090431 kernel: audit: type=1130 audit(1734072228.079:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.094235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 06:43:48.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:43:48.101490 kernel: audit: type=1130 audit(1734072228.094:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.101746 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 06:43:48.109424 kernel: audit: type=1130 audit(1734072228.102:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.103157 systemd[1]: Finished systemd-sysctl.service. Dec 13 06:43:48.114660 kernel: audit: type=1130 audit(1734072228.108:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.109822 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 06:43:48.130830 dracut-cmdline[225]: dracut-dracut-053 Dec 13 06:43:48.130830 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 06:43:48.130830 dracut-cmdline[225]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 06:43:48.211420 kernel: Loading iSCSI transport class v2.0-870. Dec 13 06:43:48.233469 kernel: iscsi: registered transport (tcp) Dec 13 06:43:48.261961 kernel: iscsi: registered transport (qla4xxx) Dec 13 06:43:48.262016 kernel: QLogic iSCSI HBA Driver Dec 13 06:43:48.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.313452 systemd[1]: Finished dracut-cmdline.service. Dec 13 06:43:48.316739 systemd[1]: Starting dracut-pre-udev.service... Dec 13 06:43:48.377470 kernel: raid6: sse2x4 gen() 7576 MB/s Dec 13 06:43:48.395478 kernel: raid6: sse2x4 xor() 4948 MB/s Dec 13 06:43:48.413477 kernel: raid6: sse2x2 gen() 5371 MB/s Dec 13 06:43:48.431468 kernel: raid6: sse2x2 xor() 7755 MB/s Dec 13 06:43:48.449461 kernel: raid6: sse2x1 gen() 5315 MB/s Dec 13 06:43:48.468100 kernel: raid6: sse2x1 xor() 7086 MB/s Dec 13 06:43:48.468142 kernel: raid6: using algorithm sse2x4 gen() 7576 MB/s Dec 13 06:43:48.468162 kernel: raid6: .... 
xor() 4948 MB/s, rmw enabled Dec 13 06:43:48.469432 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 06:43:48.487456 kernel: xor: automatically using best checksumming function avx Dec 13 06:43:48.607459 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 06:43:48.621615 systemd[1]: Finished dracut-pre-udev.service. Dec 13 06:43:48.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.622000 audit: BPF prog-id=7 op=LOAD Dec 13 06:43:48.622000 audit: BPF prog-id=8 op=LOAD Dec 13 06:43:48.623767 systemd[1]: Starting systemd-udevd.service... Dec 13 06:43:48.641962 systemd-udevd[402]: Using default interface naming scheme 'v252'. Dec 13 06:43:48.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.650443 systemd[1]: Started systemd-udevd.service. Dec 13 06:43:48.652271 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 06:43:48.672096 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Dec 13 06:43:48.713602 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 06:43:48.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.717156 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:43:48.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:48.813829 systemd[1]: Finished systemd-udev-trigger.service. 
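[Editor's note: the dracut-cmdline entries above echo the kernel command line, a flat list of space-separated tokens in which a key may repeat (two console= entries here) and only the first '=' separates key from value (root=LABEL=ROOT). A minimal parser sketch reflecting those rules:]

```python
def parse_cmdline(cmdline: str) -> dict[str, list[str]]:
    """Parse a kernel command line into key -> list of values.

    Bare flags get an empty-string value; repeated keys
    (e.g. two console= entries) are kept in order.
    """
    params: dict[str, list[str]] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")  # split at first '=' only
        params.setdefault(key, []).append(value)
    return params

# Fragment of the command line shown in the log above
line = "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 consoleblank=0"
parsed = parse_cmdline(line)
print(parsed["console"])  # ['ttyS0,115200n8', 'tty0']
```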
Dec 13 06:43:48.895749 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 06:43:48.947153 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 06:43:48.947189 kernel: GPT:17805311 != 125829119 Dec 13 06:43:48.947216 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 06:43:48.947233 kernel: GPT:17805311 != 125829119 Dec 13 06:43:48.947248 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 06:43:48.947276 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:43:48.947293 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 06:43:48.969428 kernel: libata version 3.00 loaded. Dec 13 06:43:48.988532 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 06:43:48.991848 kernel: AVX version of gcm_enc/dec engaged. Dec 13 06:43:48.991874 kernel: AES CTR mode by8 optimization enabled Dec 13 06:43:48.994573 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Dec 13 06:43:48.994992 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 06:43:49.003722 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 06:43:49.009043 kernel: ACPI: bus type USB registered Dec 13 06:43:49.009077 kernel: usbcore: registered new interface driver usbfs Dec 13 06:43:49.009095 kernel: usbcore: registered new interface driver hub Dec 13 06:43:49.009111 kernel: usbcore: registered new device driver usb Dec 13 06:43:49.017849 systemd[1]: Starting disk-uuid.service... Dec 13 06:43:49.027272 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 06:43:49.030310 disk-uuid[471]: Primary Header is updated. Dec 13 06:43:49.030310 disk-uuid[471]: Secondary Entries is updated. Dec 13 06:43:49.030310 disk-uuid[471]: Secondary Header is updated. 
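[Editor's note: the virtio_blk line above reports the same capacity three ways: a 512-byte block count, decimal GB, and binary GiB. The conversions agree, with the block count taken from the log:]

```python
blocks = 125829120      # 512-byte logical blocks, from the log
nbytes = blocks * 512

gb = nbytes / 10**9     # decimal gigabytes
gib = nbytes / 2**30    # binary gibibytes
print(round(gb, 1), round(gib, 1))  # 64.4 60.0
```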
Dec 13 06:43:49.055705 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 06:43:49.096803 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 06:43:49.096840 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 06:43:49.097033 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 06:43:49.097208 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:43:49.097445 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 06:43:49.097642 kernel: scsi host0: ahci Dec 13 06:43:49.097851 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 06:43:49.098042 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 06:43:49.098255 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 06:43:49.098457 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 06:43:49.098635 kernel: hub 1-0:1.0: USB hub found Dec 13 06:43:49.098840 kernel: hub 1-0:1.0: 4 ports detected Dec 13 06:43:49.099032 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 06:43:49.099535 kernel: hub 2-0:1.0: USB hub found Dec 13 06:43:49.099771 kernel: hub 2-0:1.0: 4 ports detected Dec 13 06:43:49.099981 kernel: scsi host1: ahci Dec 13 06:43:49.100212 kernel: scsi host2: ahci Dec 13 06:43:49.100485 kernel: scsi host3: ahci Dec 13 06:43:49.100716 kernel: scsi host4: ahci Dec 13 06:43:49.100940 kernel: scsi host5: ahci Dec 13 06:43:49.101132 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Dec 13 06:43:49.101159 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Dec 13 06:43:49.101178 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Dec 13 06:43:49.101194 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Dec 13 06:43:49.101211 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Dec 13 06:43:49.101228 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Dec 13 06:43:49.061615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
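[Editor's note: the six ata probe lines above share one AHCI base (abar) with per-port register blocks at a fixed stride; per the AHCI register layout, port registers start at offset 0x100 and each port occupies 0x80 bytes. The addresses in the log follow that layout:]

```python
ABAR = 0xFEA5B000  # AHCI base address from the log ("abar m4096@0xfea5b000")

def port_regs(n: int) -> int:
    # Per-port register blocks: offset 0x100 from ABAR, 0x80 bytes each
    return ABAR + 0x100 + 0x80 * n

# Matches the ata1..ata6 lines in the log (ports 0..5)
assert [hex(port_regs(n)) for n in range(6)] == [
    "0xfea5b100", "0xfea5b180", "0xfea5b200",
    "0xfea5b280", "0xfea5b300", "0xfea5b380",
]
```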
Dec 13 06:43:49.312515 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 06:43:49.405496 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 06:43:49.413415 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 06:43:49.413455 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 06:43:49.417728 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 06:43:49.417767 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 06:43:49.419363 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 06:43:49.455430 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 06:43:49.461617 kernel: usbcore: registered new interface driver usbhid Dec 13 06:43:49.461660 kernel: usbhid: USB HID core driver Dec 13 06:43:49.470691 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 13 06:43:49.470735 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 06:43:50.042433 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 06:43:50.043208 disk-uuid[472]: The operation has completed successfully. Dec 13 06:43:50.100599 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 06:43:50.101714 systemd[1]: Finished disk-uuid.service. Dec 13 06:43:50.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:50.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:50.109365 systemd[1]: Starting verity-setup.service... 
Dec 13 06:43:50.132425 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 06:43:50.186999 systemd[1]: Found device dev-mapper-usr.device. Dec 13 06:43:50.189189 systemd[1]: Mounting sysusr-usr.mount... Dec 13 06:43:50.191337 systemd[1]: Finished verity-setup.service. Dec 13 06:43:50.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:50.288441 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 06:43:50.289853 systemd[1]: Mounted sysusr-usr.mount. Dec 13 06:43:50.290758 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 06:43:50.291982 systemd[1]: Starting ignition-setup.service... Dec 13 06:43:50.293955 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 06:43:50.313624 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 06:43:50.313685 kernel: BTRFS info (device vda6): using free space tree Dec 13 06:43:50.313715 kernel: BTRFS info (device vda6): has skinny extents Dec 13 06:43:50.331620 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 06:43:50.340878 systemd[1]: Finished ignition-setup.service. Dec 13 06:43:50.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:50.342750 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 06:43:50.463929 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 06:43:50.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:43:50.466000 audit: BPF prog-id=9 op=LOAD Dec 13 06:43:50.467468 systemd[1]: Starting systemd-networkd.service... Dec 13 06:43:50.505937 systemd-networkd[710]: lo: Link UP Dec 13 06:43:50.505952 systemd-networkd[710]: lo: Gained carrier Dec 13 06:43:50.507602 systemd-networkd[710]: Enumeration completed Dec 13 06:43:50.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:50.508014 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 06:43:50.510475 systemd-networkd[710]: eth0: Link UP Dec 13 06:43:50.510483 systemd-networkd[710]: eth0: Gained carrier Dec 13 06:43:50.510842 systemd[1]: Started systemd-networkd.service. Dec 13 06:43:50.512211 systemd[1]: Reached target network.target. Dec 13 06:43:50.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:50.514620 systemd[1]: Starting iscsiuio.service... Dec 13 06:43:50.532057 systemd[1]: Started iscsiuio.service. Dec 13 06:43:50.537620 systemd[1]: Starting iscsid.service... Dec 13 06:43:50.543904 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 06:43:50.543904 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 06:43:50.543904 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 06:43:50.543904 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 06:43:50.543904 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 06:43:50.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.551954 ignition[632]: Ignition 2.14.0
Dec 13 06:43:50.547766 systemd[1]: Started iscsid.service.
Dec 13 06:43:50.567507 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 06:43:50.551975 ignition[632]: Stage: fetch-offline
Dec 13 06:43:50.550638 systemd[1]: Starting dracut-initqueue.service...
Dec 13 06:43:50.552099 ignition[632]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:43:50.556295 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 06:43:50.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.552144 ignition[632]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:43:50.558405 systemd[1]: Starting ignition-fetch.service...
Dec 13 06:43:50.553965 ignition[632]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:43:50.568540 systemd-networkd[710]: eth0: DHCPv4 address 10.230.56.170/30, gateway 10.230.56.169 acquired from 10.230.56.169
Dec 13 06:43:50.554328 ignition[632]: parsed url from cmdline: ""
Dec 13 06:43:50.573657 systemd[1]: Finished dracut-initqueue.service.
Dec 13 06:43:50.554336 ignition[632]: no config URL provided
Dec 13 06:43:50.574791 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 06:43:50.554347 ignition[632]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 06:43:50.575404 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 06:43:50.554377 ignition[632]: no config at "/usr/lib/ignition/user.ign"
Dec 13 06:43:50.576003 systemd[1]: Reached target remote-fs.target.
Dec 13 06:43:50.554454 ignition[632]: failed to fetch config: resource requires networking
Dec 13 06:43:50.577908 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 06:43:50.554638 ignition[632]: Ignition finished successfully
Dec 13 06:43:50.585585 ignition[717]: Ignition 2.14.0
Dec 13 06:43:50.585596 ignition[717]: Stage: fetch
Dec 13 06:43:50.585786 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:43:50.585819 ignition[717]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:43:50.587174 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:43:50.593813 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 06:43:50.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.587318 ignition[717]: parsed url from cmdline: ""
Dec 13 06:43:50.587335 ignition[717]: no config URL provided
Dec 13 06:43:50.587345 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 06:43:50.587374 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Dec 13 06:43:50.598407 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 06:43:50.598486 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 06:43:50.598971 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 06:43:50.614703 ignition[717]: GET result: OK
Dec 13 06:43:50.614797 ignition[717]: parsing config with SHA512: 4d665fdae327dd691dc0a3237d9c15bc931c653759fb286ca93f816e5a89af2abadfe8fe01aa185e4cc96383904c641914463a974de5f186226290172ea923b6
Dec 13 06:43:50.622844 unknown[717]: fetched base config from "system"
Dec 13 06:43:50.623723 unknown[717]: fetched base config from "system"
Dec 13 06:43:50.623738 unknown[717]: fetched user config from "openstack"
Dec 13 06:43:50.625822 ignition[717]: fetch: fetch complete
Dec 13 06:43:50.625835 ignition[717]: fetch: fetch passed
Dec 13 06:43:50.625960 ignition[717]: Ignition finished successfully
Dec 13 06:43:50.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.629653 systemd[1]: Finished ignition-fetch.service.
Dec 13 06:43:50.632513 systemd[1]: Starting ignition-kargs.service...
Dec 13 06:43:50.645125 ignition[735]: Ignition 2.14.0
Dec 13 06:43:50.645146 ignition[735]: Stage: kargs
Dec 13 06:43:50.645315 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:43:50.645376 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:43:50.646729 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:43:50.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.647971 ignition[735]: kargs: kargs passed
Dec 13 06:43:50.649350 systemd[1]: Finished ignition-kargs.service.
Dec 13 06:43:50.648043 ignition[735]: Ignition finished successfully
Dec 13 06:43:50.651668 systemd[1]: Starting ignition-disks.service...
Dec 13 06:43:50.663250 ignition[740]: Ignition 2.14.0
Dec 13 06:43:50.663272 ignition[740]: Stage: disks
Dec 13 06:43:50.663508 ignition[740]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:43:50.663547 ignition[740]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:43:50.664861 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:43:50.666143 ignition[740]: disks: disks passed
Dec 13 06:43:50.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.667156 systemd[1]: Finished ignition-disks.service.
Dec 13 06:43:50.666209 ignition[740]: Ignition finished successfully
Dec 13 06:43:50.667986 systemd[1]: Reached target initrd-root-device.target.
Dec 13 06:43:50.668672 systemd[1]: Reached target local-fs-pre.target.
Dec 13 06:43:50.669880 systemd[1]: Reached target local-fs.target.
Dec 13 06:43:50.671097 systemd[1]: Reached target sysinit.target.
Dec 13 06:43:50.672333 systemd[1]: Reached target basic.target.
Dec 13 06:43:50.675007 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 06:43:50.695094 systemd-fsck[747]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 06:43:50.700807 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 06:43:50.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.702892 systemd[1]: Mounting sysroot.mount...
Dec 13 06:43:50.721447 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 06:43:50.722162 systemd[1]: Mounted sysroot.mount.
Dec 13 06:43:50.722999 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 06:43:50.725860 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 06:43:50.727083 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 06:43:50.727989 systemd[1]: Starting flatcar-openstack-hostname.service...
Dec 13 06:43:50.728795 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 06:43:50.728871 systemd[1]: Reached target ignition-diskful.target.
Dec 13 06:43:50.736622 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 06:43:50.738948 systemd[1]: Starting initrd-setup-root.service...
Dec 13 06:43:50.749423 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 06:43:50.761802 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory
Dec 13 06:43:50.770546 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 06:43:50.780699 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 06:43:50.846194 systemd[1]: Finished initrd-setup-root.service.
Dec 13 06:43:50.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.848281 systemd[1]: Starting ignition-mount.service...
Dec 13 06:43:50.849972 systemd[1]: Starting sysroot-boot.service...
Dec 13 06:43:50.859989 bash[801]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 06:43:50.872408 ignition[802]: INFO : Ignition 2.14.0
Dec 13 06:43:50.873456 ignition[802]: INFO : Stage: mount
Dec 13 06:43:50.873456 ignition[802]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:43:50.873456 ignition[802]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:43:50.876930 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:43:50.876930 ignition[802]: INFO : mount: mount passed
Dec 13 06:43:50.876930 ignition[802]: INFO : Ignition finished successfully
Dec 13 06:43:50.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.876886 systemd[1]: Finished ignition-mount.service.
Dec 13 06:43:50.895382 coreos-metadata[753]: Dec 13 06:43:50.895 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 06:43:50.901946 systemd[1]: Finished sysroot-boot.service.
Dec 13 06:43:50.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.909379 coreos-metadata[753]: Dec 13 06:43:50.909 INFO Fetch successful
Dec 13 06:43:50.910268 coreos-metadata[753]: Dec 13 06:43:50.910 INFO wrote hostname srv-8eon8.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 06:43:50.913360 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 06:43:50.913532 systemd[1]: Finished flatcar-openstack-hostname.service.
Dec 13 06:43:50.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:50.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:51.210649 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 06:43:51.234604 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (810)
Dec 13 06:43:51.234640 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 06:43:51.234660 kernel: BTRFS info (device vda6): using free space tree
Dec 13 06:43:51.234713 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 06:43:51.239118 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 06:43:51.241018 systemd[1]: Starting ignition-files.service...
Dec 13 06:43:51.265434 ignition[830]: INFO : Ignition 2.14.0
Dec 13 06:43:51.265434 ignition[830]: INFO : Stage: files
Dec 13 06:43:51.267197 ignition[830]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:43:51.267197 ignition[830]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:43:51.267197 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:43:51.270467 ignition[830]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 06:43:51.273500 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 06:43:51.273500 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 06:43:51.287205 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 06:43:51.288235 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 06:43:51.289220 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 06:43:51.289220 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 06:43:51.288306 unknown[830]: wrote ssh authorized keys file for user: core
Dec 13 06:43:51.292207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 06:43:51.292207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 06:43:51.292207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 06:43:51.292207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 06:43:51.292207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 06:43:51.292207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 06:43:51.292207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 06:43:51.876706 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 06:43:52.516828 systemd-networkd[710]: eth0: Gained IPv6LL
Dec 13 06:43:53.483563 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 06:43:53.489187 ignition[830]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 06:43:53.489187 ignition[830]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 06:43:53.489187 ignition[830]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 06:43:53.489187 ignition[830]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 06:43:53.496724 ignition[830]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 06:43:53.498129 ignition[830]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 06:43:53.499306 ignition[830]: INFO : files: files passed
Dec 13 06:43:53.500136 ignition[830]: INFO : Ignition finished successfully
Dec 13 06:43:53.502329 systemd[1]: Finished ignition-files.service.
Dec 13 06:43:53.512467 kernel: kauditd_printk_skb: 28 callbacks suppressed
Dec 13 06:43:53.512509 kernel: audit: type=1130 audit(1734072233.504:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.507151 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 06:43:53.513615 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 06:43:53.516040 systemd[1]: Starting ignition-quench.service...
Dec 13 06:43:53.520912 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 06:43:53.521714 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 06:43:53.522568 systemd[1]: Finished ignition-quench.service.
Dec 13 06:43:53.533442 kernel: audit: type=1130 audit(1734072233.523:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.533469 kernel: audit: type=1131 audit(1734072233.523:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.523768 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 06:43:53.539706 kernel: audit: type=1130 audit(1734072233.533:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.534286 systemd[1]: Reached target ignition-complete.target.
Dec 13 06:43:53.541314 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 06:43:53.559200 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 06:43:53.559442 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 06:43:53.571229 kernel: audit: type=1130 audit(1734072233.560:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.571261 kernel: audit: type=1131 audit(1734072233.560:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.561203 systemd[1]: Reached target initrd-fs.target.
Dec 13 06:43:53.571850 systemd[1]: Reached target initrd.target.
Dec 13 06:43:53.573150 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 06:43:53.574235 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 06:43:53.592054 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 06:43:53.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.598443 kernel: audit: type=1130 audit(1734072233.593:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.599606 systemd[1]: Starting initrd-cleanup.service...
Dec 13 06:43:53.617147 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 06:43:53.618128 systemd[1]: Finished initrd-cleanup.service.
Dec 13 06:43:53.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.620517 systemd[1]: Stopped target nss-lookup.target.
Dec 13 06:43:53.630146 kernel: audit: type=1130 audit(1734072233.619:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.630188 kernel: audit: type=1131 audit(1734072233.619:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.630746 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 06:43:53.631449 systemd[1]: Stopped target timers.target.
Dec 13 06:43:53.632928 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 06:43:53.639657 kernel: audit: type=1131 audit(1734072233.633:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.633009 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 06:43:53.634296 systemd[1]: Stopped target initrd.target.
Dec 13 06:43:53.640436 systemd[1]: Stopped target basic.target.
Dec 13 06:43:53.641698 systemd[1]: Stopped target ignition-complete.target.
Dec 13 06:43:53.642964 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 06:43:53.644191 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 06:43:53.645501 systemd[1]: Stopped target remote-fs.target.
Dec 13 06:43:53.646741 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 06:43:53.647989 systemd[1]: Stopped target sysinit.target.
Dec 13 06:43:53.649167 systemd[1]: Stopped target local-fs.target.
Dec 13 06:43:53.650494 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 06:43:53.651737 systemd[1]: Stopped target swap.target.
Dec 13 06:43:53.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.652950 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 06:43:53.653023 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 06:43:53.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.654216 systemd[1]: Stopped target cryptsetup.target.
Dec 13 06:43:53.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.655330 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 06:43:53.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.655415 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 06:43:53.656730 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 06:43:53.656792 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 06:43:53.672174 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 06:43:53.672246 systemd[1]: Stopped ignition-files.service.
Dec 13 06:43:53.674459 systemd[1]: Stopping ignition-mount.service...
Dec 13 06:43:53.683535 systemd[1]: Stopping sysroot-boot.service...
Dec 13 06:43:53.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.694436 ignition[868]: INFO : Ignition 2.14.0
Dec 13 06:43:53.694436 ignition[868]: INFO : Stage: umount
Dec 13 06:43:53.694436 ignition[868]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 06:43:53.694436 ignition[868]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 06:43:53.694436 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 06:43:53.689317 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 06:43:53.700172 ignition[868]: INFO : umount: umount passed
Dec 13 06:43:53.700172 ignition[868]: INFO : Ignition finished successfully
Dec 13 06:43:53.689428 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 06:43:53.692498 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 06:43:53.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.692572 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 06:43:53.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.701782 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 06:43:53.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.701910 systemd[1]: Stopped ignition-mount.service.
Dec 13 06:43:53.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.703002 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 06:43:53.703069 systemd[1]: Stopped ignition-disks.service.
Dec 13 06:43:53.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.704251 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 06:43:53.704338 systemd[1]: Stopped ignition-kargs.service.
Dec 13 06:43:53.705549 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 06:43:53.705619 systemd[1]: Stopped ignition-fetch.service.
Dec 13 06:43:53.706901 systemd[1]: Stopped target network.target.
Dec 13 06:43:53.708029 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 06:43:53.708091 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 06:43:53.709431 systemd[1]: Stopped target paths.target.
Dec 13 06:43:53.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.712795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 06:43:53.714737 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 06:43:53.715719 systemd[1]: Stopped target slices.target.
Dec 13 06:43:53.716403 systemd[1]: Stopped target sockets.target.
Dec 13 06:43:53.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.717708 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 06:43:53.717763 systemd[1]: Closed iscsid.socket.
Dec 13 06:43:53.718886 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 06:43:53.718945 systemd[1]: Closed iscsiuio.socket.
Dec 13 06:43:53.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.720196 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 06:43:53.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.720297 systemd[1]: Stopped ignition-setup.service.
Dec 13 06:43:53.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.721648 systemd[1]: Stopping systemd-networkd.service...
Dec 13 06:43:53.723431 systemd[1]: Stopping systemd-resolved.service...
Dec 13 06:43:53.725446 systemd-networkd[710]: eth0: DHCPv6 lease lost
Dec 13 06:43:53.726365 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 06:43:53.745000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 06:43:53.727071 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 06:43:53.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.727206 systemd[1]: Stopped systemd-networkd.service.
Dec 13 06:43:53.729005 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 06:43:53.729063 systemd[1]: Closed systemd-networkd.socket.
Dec 13 06:43:53.730667 systemd[1]: Stopping network-cleanup.service...
Dec 13 06:43:53.751000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 06:43:53.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.731376 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 06:43:53.731500 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 06:43:53.734376 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 06:43:53.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.734461 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 06:43:53.736158 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 06:43:53.736228 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 06:43:53.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.737222 systemd[1]: Stopping systemd-udevd.service...
Dec 13 06:43:53.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.745148 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 06:43:53.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.745939 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 06:43:53.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:53.746075 systemd[1]: Stopped systemd-resolved.service.
Dec 13 06:43:53.751015 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 06:43:53.751348 systemd[1]: Stopped systemd-udevd.service.
Dec 13 06:43:53.754849 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 06:43:53.754978 systemd[1]: Stopped network-cleanup.service.
Dec 13 06:43:53.756594 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 06:43:53.756657 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 06:43:53.757655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 06:43:53.757704 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 06:43:53.758815 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 06:43:53.758874 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 06:43:53.760213 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 06:43:53.760288 systemd[1]: Stopped dracut-cmdline.service. Dec 13 06:43:53.761460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 06:43:53.761518 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 06:43:53.763520 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 06:43:53.768218 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 06:43:53.768309 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 06:43:53.769120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 06:43:53.769187 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 06:43:53.769857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 06:43:53.769925 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 06:43:53.771838 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 06:43:53.772470 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 06:43:53.772597 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 06:43:53.920378 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 06:43:53.920594 systemd[1]: Stopped sysroot-boot.service. Dec 13 06:43:53.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:53.922341 systemd[1]: Reached target initrd-switch-root.target. Dec 13 06:43:53.923450 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 06:43:53.923521 systemd[1]: Stopped initrd-setup-root.service. 
Dec 13 06:43:53.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:53.926917 systemd[1]: Starting initrd-switch-root.service... Dec 13 06:43:53.943240 systemd[1]: Switching root. Dec 13 06:43:53.965733 iscsid[715]: iscsid shutting down. Dec 13 06:43:53.966674 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Dec 13 06:43:53.966755 systemd-journald[201]: Journal stopped Dec 13 06:43:57.957099 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 06:43:57.957204 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 06:43:57.957287 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 06:43:57.957319 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 06:43:57.957351 kernel: SELinux: policy capability open_perms=1 Dec 13 06:43:57.957377 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 06:43:57.957417 kernel: SELinux: policy capability always_check_network=0 Dec 13 06:43:57.957438 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 06:43:57.957457 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 06:43:57.957477 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 06:43:57.957496 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 06:43:57.957532 systemd[1]: Successfully loaded SELinux policy in 60.056ms. Dec 13 06:43:57.957570 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.740ms. 
Dec 13 06:43:57.957600 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 06:43:57.957622 systemd[1]: Detected virtualization kvm. Dec 13 06:43:57.957644 systemd[1]: Detected architecture x86-64. Dec 13 06:43:57.957670 systemd[1]: Detected first boot. Dec 13 06:43:57.957692 systemd[1]: Hostname set to . Dec 13 06:43:57.957715 systemd[1]: Initializing machine ID from VM UUID. Dec 13 06:43:57.957748 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 06:43:57.957771 systemd[1]: Populated /etc with preset unit settings. Dec 13 06:43:57.957794 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 06:43:57.957816 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 06:43:57.957851 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 06:43:57.957874 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 06:43:57.957906 systemd[1]: Stopped iscsiuio.service. Dec 13 06:43:57.957928 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 06:43:57.957948 systemd[1]: Stopped iscsid.service. Dec 13 06:43:57.957968 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 06:43:57.958022 systemd[1]: Stopped initrd-switch-root.service. 
Dec 13 06:43:57.958045 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 06:43:57.958065 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 06:43:57.958086 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 06:43:57.958141 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 06:43:57.958166 systemd[1]: Created slice system-getty.slice. Dec 13 06:43:57.958199 systemd[1]: Created slice system-modprobe.slice. Dec 13 06:43:57.958222 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 06:43:57.958251 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 06:43:57.958274 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 06:43:57.958306 systemd[1]: Created slice user.slice. Dec 13 06:43:57.958329 systemd[1]: Started systemd-ask-password-console.path. Dec 13 06:43:57.958351 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 06:43:57.958372 systemd[1]: Set up automount boot.automount. Dec 13 06:43:57.958405 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 06:43:57.958428 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 06:43:57.958462 systemd[1]: Stopped target initrd-fs.target. Dec 13 06:43:57.958485 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 06:43:57.958505 systemd[1]: Reached target integritysetup.target. Dec 13 06:43:57.958526 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 06:43:57.958547 systemd[1]: Reached target remote-fs.target. Dec 13 06:43:57.958597 systemd[1]: Reached target slices.target. Dec 13 06:43:57.958620 systemd[1]: Reached target swap.target. Dec 13 06:43:57.958641 systemd[1]: Reached target torcx.target. Dec 13 06:43:57.958663 systemd[1]: Reached target veritysetup.target. Dec 13 06:43:57.958684 systemd[1]: Listening on systemd-coredump.socket. Dec 13 06:43:57.962730 systemd[1]: Listening on systemd-initctl.socket. 
Dec 13 06:43:57.962761 systemd[1]: Listening on systemd-networkd.socket. Dec 13 06:43:57.962784 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 06:43:57.962820 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 06:43:57.962844 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 06:43:57.962865 systemd[1]: Mounting dev-hugepages.mount... Dec 13 06:43:57.962885 systemd[1]: Mounting dev-mqueue.mount... Dec 13 06:43:57.962906 systemd[1]: Mounting media.mount... Dec 13 06:43:57.962928 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:43:57.962961 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 06:43:57.962984 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 06:43:57.963035 systemd[1]: Mounting tmp.mount... Dec 13 06:43:57.963061 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 06:43:57.963083 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 06:43:57.963104 systemd[1]: Starting kmod-static-nodes.service... Dec 13 06:43:57.963135 systemd[1]: Starting modprobe@configfs.service... Dec 13 06:43:57.963160 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 06:43:57.963188 systemd[1]: Starting modprobe@drm.service... Dec 13 06:43:57.963222 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 06:43:57.963245 systemd[1]: Starting modprobe@fuse.service... Dec 13 06:43:57.963267 systemd[1]: Starting modprobe@loop.service... Dec 13 06:43:57.963290 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 06:43:57.963311 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 06:43:57.963331 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 06:43:57.963353 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 06:43:57.963373 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 13 06:43:57.963417 systemd[1]: Stopped systemd-journald.service. Dec 13 06:43:57.963454 systemd[1]: Starting systemd-journald.service... Dec 13 06:43:57.963476 kernel: fuse: init (API version 7.34) Dec 13 06:43:57.963497 systemd[1]: Starting systemd-modules-load.service... Dec 13 06:43:57.963518 systemd[1]: Starting systemd-network-generator.service... Dec 13 06:43:57.963546 systemd[1]: Starting systemd-remount-fs.service... Dec 13 06:43:57.963569 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 06:43:57.963590 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 06:43:57.963612 systemd[1]: Stopped verity-setup.service. Dec 13 06:43:57.963632 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 06:43:57.963665 systemd[1]: Mounted dev-hugepages.mount. Dec 13 06:43:57.963695 systemd[1]: Mounted dev-mqueue.mount. Dec 13 06:43:57.963716 systemd[1]: Mounted media.mount. Dec 13 06:43:57.963738 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 06:43:57.963759 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 06:43:57.963780 systemd[1]: Mounted tmp.mount. Dec 13 06:43:57.963801 systemd[1]: Finished kmod-static-nodes.service. Dec 13 06:43:57.963834 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 06:43:57.963857 systemd[1]: Finished modprobe@configfs.service. Dec 13 06:43:57.963877 kernel: loop: module loaded Dec 13 06:43:57.963897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 06:43:57.963924 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 06:43:57.963952 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 06:43:57.963974 systemd[1]: Finished modprobe@drm.service. Dec 13 06:43:57.964006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 06:43:57.964028 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 06:43:57.964060 systemd-journald[978]: Journal started Dec 13 06:43:57.964143 systemd-journald[978]: Runtime Journal (/run/log/journal/00370428ee944804ae4377d98cdb75a8) is 4.7M, max 38.1M, 33.3M free. Dec 13 06:43:54.134000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 06:43:54.223000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:43:54.223000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 06:43:54.223000 audit: BPF prog-id=10 op=LOAD Dec 13 06:43:54.223000 audit: BPF prog-id=10 op=UNLOAD Dec 13 06:43:54.223000 audit: BPF prog-id=11 op=LOAD Dec 13 06:43:54.223000 audit: BPF prog-id=11 op=UNLOAD Dec 13 06:43:57.967430 systemd[1]: Started systemd-journald.service. 
Dec 13 06:43:54.361000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 06:43:54.361000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018a2d2 a1=c000194378 a2=c000196800 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:43:54.361000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:43:54.363000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 06:43:54.363000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018a3a9 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:43:54.363000 audit: CWD cwd="/" Dec 13 06:43:54.363000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:54.363000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:54.363000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 06:43:57.684000 audit: BPF prog-id=12 op=LOAD Dec 13 06:43:57.684000 audit: BPF prog-id=3 op=UNLOAD Dec 13 06:43:57.684000 audit: BPF prog-id=13 op=LOAD Dec 13 06:43:57.684000 audit: BPF prog-id=14 op=LOAD Dec 13 06:43:57.684000 audit: BPF prog-id=4 op=UNLOAD Dec 13 06:43:57.684000 audit: BPF prog-id=5 op=UNLOAD Dec 13 06:43:57.685000 audit: BPF prog-id=15 op=LOAD Dec 13 06:43:57.685000 audit: BPF prog-id=12 op=UNLOAD Dec 13 06:43:57.686000 audit: BPF prog-id=16 op=LOAD Dec 13 06:43:57.686000 audit: BPF prog-id=17 op=LOAD Dec 13 06:43:57.686000 audit: BPF prog-id=13 op=UNLOAD Dec 13 06:43:57.686000 audit: BPF prog-id=14 op=UNLOAD Dec 13 06:43:57.689000 audit: BPF prog-id=18 op=LOAD Dec 13 06:43:57.689000 audit: BPF prog-id=15 op=UNLOAD Dec 13 06:43:57.689000 audit: BPF prog-id=19 op=LOAD Dec 13 06:43:57.690000 audit: BPF prog-id=20 op=LOAD Dec 13 06:43:57.690000 audit: BPF prog-id=16 op=UNLOAD Dec 13 06:43:57.690000 audit: BPF prog-id=17 op=UNLOAD Dec 13 06:43:57.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:43:57.699000 audit: BPF prog-id=18 op=UNLOAD Dec 13 06:43:57.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:43:57.878000 audit: BPF prog-id=21 op=LOAD Dec 13 06:43:57.879000 audit: BPF prog-id=22 op=LOAD Dec 13 06:43:57.879000 audit: BPF prog-id=23 op=LOAD Dec 13 06:43:57.879000 audit: BPF prog-id=19 op=UNLOAD Dec 13 06:43:57.879000 audit: BPF prog-id=20 op=UNLOAD Dec 13 06:43:57.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.953000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 06:43:57.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 06:43:57.953000 audit[978]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd1190a8f0 a2=4000 a3=7ffd1190a98c items=0 ppid=1 pid=978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 06:43:57.953000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 06:43:57.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.681748 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 06:43:54.355945 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 06:43:57.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.681786 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 06:43:54.356675 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:43:57.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.692001 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 06:43:54.356724 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 06:43:57.970844 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 13 06:43:54.356778 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 06:43:57.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:57.971072 systemd[1]: Finished modprobe@fuse.service. Dec 13 06:43:54.356797 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 06:43:57.972192 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 06:43:54.356858 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 06:43:57.972406 systemd[1]: Finished modprobe@loop.service. Dec 13 06:43:54.356888 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 06:43:57.973572 systemd[1]: Finished systemd-modules-load.service. Dec 13 06:43:54.357292 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 06:43:57.974641 systemd[1]: Finished systemd-network-generator.service. 
Dec 13 06:43:54.357378 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 06:43:57.975864 systemd[1]: Finished systemd-remount-fs.service. Dec 13 06:43:54.357473 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 06:43:54.359786 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 06:43:57.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 06:43:54.359863 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 06:43:57.978879 systemd[1]: Reached target network-pre.target. 
Dec 13 06:43:54.359900 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 06:43:54.359930 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 06:43:54.359963 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 06:43:54.359989 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 06:43:57.102548 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:43:57.103236 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:43:57.103945 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:43:57.104352 /usr/lib/systemd/system-generators/torcx-generator[900]: 
time="2024-12-13T06:43:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 06:43:57.104468 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 06:43:57.104632 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T06:43:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 06:43:57.995786 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 06:43:58.002928 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 06:43:58.006499 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 06:43:58.008979 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 06:43:58.012347 systemd[1]: Starting systemd-journal-flush.service... Dec 13 06:43:58.014008 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 06:43:58.016170 systemd[1]: Starting systemd-random-seed.service... Dec 13 06:43:58.017063 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 06:43:58.020878 systemd[1]: Starting systemd-sysctl.service... Dec 13 06:43:58.026359 systemd[1]: Finished flatcar-tmpfiles.service. 
Dec 13 06:43:58.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.029696 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 06:43:58.030690 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 06:43:58.033460 systemd[1]: Starting systemd-sysusers.service...
Dec 13 06:43:58.041149 systemd-journald[978]: Time spent on flushing to /var/log/journal/00370428ee944804ae4377d98cdb75a8 is 45.291ms for 1279 entries.
Dec 13 06:43:58.041149 systemd-journald[978]: System Journal (/var/log/journal/00370428ee944804ae4377d98cdb75a8) is 8.0M, max 584.8M, 576.8M free.
Dec 13 06:43:58.097233 systemd-journald[978]: Received client request to flush runtime journal.
Dec 13 06:43:58.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.042543 systemd[1]: Finished systemd-random-seed.service.
Dec 13 06:43:58.044193 systemd[1]: Reached target first-boot-complete.target.
Dec 13 06:43:58.074490 systemd[1]: Finished systemd-sysctl.service.
Dec 13 06:43:58.089547 systemd[1]: Finished systemd-sysusers.service.
Dec 13 06:43:58.092311 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 06:43:58.098485 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 06:43:58.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.141515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 06:43:58.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.213640 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 06:43:58.216415 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 06:43:58.228759 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 06:43:58.674833 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 06:43:58.685864 kernel: kauditd_printk_skb: 108 callbacks suppressed
Dec 13 06:43:58.686754 kernel: audit: type=1130 audit(1734072238.675:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.686662 systemd[1]: Starting systemd-udevd.service...
Dec 13 06:43:58.684000 audit: BPF prog-id=24 op=LOAD
Dec 13 06:43:58.691841 kernel: audit: type=1334 audit(1734072238.684:149): prog-id=24 op=LOAD
Dec 13 06:43:58.691953 kernel: audit: type=1334 audit(1734072238.684:150): prog-id=25 op=LOAD
Dec 13 06:43:58.684000 audit: BPF prog-id=25 op=LOAD
Dec 13 06:43:58.693984 kernel: audit: type=1334 audit(1734072238.684:151): prog-id=7 op=UNLOAD
Dec 13 06:43:58.684000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 06:43:58.695991 kernel: audit: type=1334 audit(1734072238.684:152): prog-id=8 op=UNLOAD
Dec 13 06:43:58.684000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 06:43:58.720944 systemd-udevd[1013]: Using default interface naming scheme 'v252'.
Dec 13 06:43:58.753123 systemd[1]: Started systemd-udevd.service.
Dec 13 06:43:58.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.759423 kernel: audit: type=1130 audit(1734072238.753:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.762239 systemd[1]: Starting systemd-networkd.service...
Dec 13 06:43:58.760000 audit: BPF prog-id=26 op=LOAD
Dec 13 06:43:58.766301 kernel: audit: type=1334 audit(1734072238.760:154): prog-id=26 op=LOAD
Dec 13 06:43:58.773000 audit: BPF prog-id=27 op=LOAD
Dec 13 06:43:58.779522 kernel: audit: type=1334 audit(1734072238.773:155): prog-id=27 op=LOAD
Dec 13 06:43:58.779573 kernel: audit: type=1334 audit(1734072238.775:156): prog-id=28 op=LOAD
Dec 13 06:43:58.779619 kernel: audit: type=1334 audit(1734072238.776:157): prog-id=29 op=LOAD
Dec 13 06:43:58.775000 audit: BPF prog-id=28 op=LOAD
Dec 13 06:43:58.776000 audit: BPF prog-id=29 op=LOAD
Dec 13 06:43:58.780585 systemd[1]: Starting systemd-userdbd.service...
Dec 13 06:43:58.846146 systemd[1]: Started systemd-userdbd.service.
Dec 13 06:43:58.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.855171 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 06:43:58.957638 systemd-networkd[1017]: lo: Link UP
Dec 13 06:43:58.957652 systemd-networkd[1017]: lo: Gained carrier
Dec 13 06:43:58.958626 systemd-networkd[1017]: Enumeration completed
Dec 13 06:43:58.958781 systemd[1]: Started systemd-networkd.service.
Dec 13 06:43:58.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:58.959748 systemd-networkd[1017]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 06:43:58.962263 systemd-networkd[1017]: eth0: Link UP
Dec 13 06:43:58.962278 systemd-networkd[1017]: eth0: Gained carrier
Dec 13 06:43:58.978451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 06:43:58.986416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 13 06:43:58.997632 systemd-networkd[1017]: eth0: DHCPv4 address 10.230.56.170/30, gateway 10.230.56.169 acquired from 10.230.56.169
Dec 13 06:43:59.000413 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 06:43:59.008457 kernel: ACPI: button: Power Button [PWRF]
Dec 13 06:43:59.070000 audit[1014]: AVC avc: denied { confidentiality } for pid=1014 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 06:43:59.070000 audit[1014]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555690bbf770 a1=337fc a2=7fa1ed607bc5 a3=5 items=110 ppid=1013 pid=1014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 06:43:59.070000 audit: CWD cwd="/"
Dec 13 06:43:59.070000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=1 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=2 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=3 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=4 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=5 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=6 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=7 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=8 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=9 name=(null) inode=13967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=10 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=11 name=(null) inode=13968 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=12 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=13 name=(null) inode=13969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=14 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=15 name=(null) inode=13970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=16 name=(null) inode=13966 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=17 name=(null) inode=13971 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=18 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=19 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=20 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=21 name=(null) inode=13973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=22 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=23 name=(null) inode=13974 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=24 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=25 name=(null) inode=13975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=26 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=27 name=(null) inode=13976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=28 name=(null) inode=13972 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=29 name=(null) inode=13977 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=30 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=31 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
06:43:59.070000 audit: PATH item=32 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=33 name=(null) inode=13979 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=34 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=35 name=(null) inode=13980 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=36 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=37 name=(null) inode=13981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=38 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=39 name=(null) inode=13982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=40 name=(null) inode=13978 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=41 
name=(null) inode=13983 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=42 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=43 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=44 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=45 name=(null) inode=13985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=46 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=47 name=(null) inode=13986 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=48 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=49 name=(null) inode=13987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=50 name=(null) inode=13984 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=51 name=(null) inode=13988 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=52 name=(null) inode=13984 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=53 name=(null) inode=13989 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=55 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=56 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=57 name=(null) inode=13991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=58 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=59 name=(null) inode=13992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=60 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=61 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=62 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=63 name=(null) inode=13994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=64 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=65 name=(null) inode=13995 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=66 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=67 name=(null) inode=13996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=68 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=69 name=(null) inode=13997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=70 name=(null) inode=13993 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=71 name=(null) inode=13998 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=72 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=73 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=74 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=75 name=(null) inode=14000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=76 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=77 name=(null) inode=14001 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=78 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=79 name=(null) inode=14002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=80 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=81 name=(null) inode=14003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=82 name=(null) inode=13999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=83 name=(null) inode=14004 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=84 name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=85 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=86 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
06:43:59.070000 audit: PATH item=87 name=(null) inode=14006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=88 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=89 name=(null) inode=14007 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=90 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=91 name=(null) inode=14008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=92 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=93 name=(null) inode=14009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=94 name=(null) inode=14005 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=95 name=(null) inode=14010 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=96 
name=(null) inode=13990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=97 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=98 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=99 name=(null) inode=14012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=100 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=101 name=(null) inode=14013 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=102 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=103 name=(null) inode=14014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=104 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 06:43:59.070000 audit: PATH item=105 name=(null) inode=14015 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=106 name=(null) inode=14011 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=107 name=(null) inode=14016 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PATH item=109 name=(null) inode=14017 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 06:43:59.070000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 06:43:59.115442 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 06:43:59.132438 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 06:43:59.156712 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 06:43:59.157064 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 06:43:59.297037 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 06:43:59.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.300072 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 06:43:59.325312 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 06:43:59.356909 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 06:43:59.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.357919 systemd[1]: Reached target cryptsetup.target.
Dec 13 06:43:59.360461 systemd[1]: Starting lvm2-activation.service...
Dec 13 06:43:59.365881 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 06:43:59.391367 systemd[1]: Finished lvm2-activation.service.
Dec 13 06:43:59.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.394056 systemd[1]: Reached target local-fs-pre.target.
Dec 13 06:43:59.394789 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 06:43:59.394849 systemd[1]: Reached target local-fs.target.
Dec 13 06:43:59.395625 systemd[1]: Reached target machines.target.
Dec 13 06:43:59.398406 systemd[1]: Starting ldconfig.service...
Dec 13 06:43:59.399715 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 06:43:59.399770 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:43:59.401450 systemd[1]: Starting systemd-boot-update.service...
Dec 13 06:43:59.403637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 06:43:59.411628 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 06:43:59.414319 systemd[1]: Starting systemd-sysext.service...
Dec 13 06:43:59.426621 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl)
Dec 13 06:43:59.428427 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 06:43:59.440116 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 06:43:59.446315 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 06:43:59.446584 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 06:43:59.455972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 06:43:59.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.457547 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 06:43:59.469419 kernel: loop0: detected capacity change from 0 to 210664
Dec 13 06:43:59.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.472989 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 06:43:59.495427 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 06:43:59.517423 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 06:43:59.534433 (sd-sysext)[1058]: Using extensions 'kubernetes'.
Dec 13 06:43:59.537720 (sd-sysext)[1058]: Merged extensions into '/usr'.
Dec 13 06:43:59.569821 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31)
Dec 13 06:43:59.569821 systemd-fsck[1055]: /dev/vda1: 789 files, 119291/258078 clusters
Dec 13 06:43:59.582881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 06:43:59.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.589620 systemd[1]: Mounting boot.mount...
Dec 13 06:43:59.591479 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 06:43:59.595744 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 06:43:59.596655 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 06:43:59.598805 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 06:43:59.602059 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 06:43:59.606221 systemd[1]: Starting modprobe@loop.service...
Dec 13 06:43:59.607637 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 06:43:59.607833 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:43:59.608053 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 06:43:59.616252 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 06:43:59.617377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 06:43:59.617576 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 06:43:59.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.618793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 06:43:59.618980 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 06:43:59.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.622924 systemd[1]: Finished systemd-sysext.service.
Dec 13 06:43:59.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.624146 systemd[1]: Mounted boot.mount.
Dec 13 06:43:59.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:43:59.625059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 06:43:59.625257 systemd[1]: Finished modprobe@loop.service.
Dec 13 06:43:59.629469 systemd[1]: Starting ensure-sysext.service...
Dec 13 06:43:59.630144 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 06:43:59.630217 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 06:43:59.631647 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 06:43:59.645092 systemd[1]: Reloading.
Dec 13 06:43:59.657822 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 06:43:59.667377 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 06:43:59.678271 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 06:43:59.783334 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2024-12-13T06:43:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 06:43:59.783380 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2024-12-13T06:43:59Z" level=info msg="torcx already run"
Dec 13 06:43:59.852977 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 06:43:59.920242 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 06:43:59.920637 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 06:43:59.948814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 06:44:00.029000 audit: BPF prog-id=30 op=LOAD
Dec 13 06:44:00.029000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 06:44:00.031000 audit: BPF prog-id=31 op=LOAD
Dec 13 06:44:00.031000 audit: BPF prog-id=32 op=LOAD
Dec 13 06:44:00.031000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 06:44:00.031000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 06:44:00.033000 audit: BPF prog-id=33 op=LOAD
Dec 13 06:44:00.033000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 06:44:00.033000 audit: BPF prog-id=34 op=LOAD
Dec 13 06:44:00.034000 audit: BPF prog-id=35 op=LOAD
Dec 13 06:44:00.034000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 06:44:00.034000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 06:44:00.038000 audit: BPF prog-id=36 op=LOAD
Dec 13 06:44:00.038000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 06:44:00.038000 audit: BPF prog-id=37 op=LOAD
Dec 13 06:44:00.038000 audit: BPF prog-id=38 op=LOAD
Dec 13 06:44:00.039000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 06:44:00.039000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 06:44:00.050243 systemd[1]: Finished ldconfig.service.
Dec 13 06:44:00.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.054171 systemd[1]: Finished systemd-boot-update.service.
Dec 13 06:44:00.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.056790 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 06:44:00.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.062497 systemd[1]: Starting audit-rules.service...
Dec 13 06:44:00.064681 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 06:44:00.067246 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 06:44:00.070000 audit: BPF prog-id=39 op=LOAD
Dec 13 06:44:00.073606 systemd[1]: Starting systemd-resolved.service...
Dec 13 06:44:00.075000 audit: BPF prog-id=40 op=LOAD
Dec 13 06:44:00.078169 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 06:44:00.081605 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 06:44:00.084257 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 06:44:00.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.090443 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 06:44:00.095312 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.097145 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 06:44:00.100482 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 06:44:00.102816 systemd[1]: Starting modprobe@loop.service...
Dec 13 06:44:00.104549 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.104734 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:44:00.104890 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 06:44:00.106313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 06:44:00.107569 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 06:44:00.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.108898 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 06:44:00.109093 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 06:44:00.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.111296 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 06:44:00.111558 systemd[1]: Finished modprobe@loop.service.
Dec 13 06:44:00.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.113251 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 06:44:00.113445 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.117240 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.118000 audit[1139]: SYSTEM_BOOT pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.120684 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 06:44:00.123882 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 06:44:00.127171 systemd[1]: Starting modprobe@loop.service...
Dec 13 06:44:00.128578 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.128837 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:44:00.129097 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 06:44:00.131283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 06:44:00.132469 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 06:44:00.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.133978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 06:44:00.134183 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 06:44:00.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.139349 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 06:44:00.139574 systemd[1]: Finished modprobe@loop.service.
Dec 13 06:44:00.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.146371 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.148908 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 06:44:00.153159 systemd[1]: Starting modprobe@drm.service...
Dec 13 06:44:00.157195 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 06:44:00.162086 systemd[1]: Starting modprobe@loop.service...
Dec 13 06:44:00.162990 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.163249 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:44:00.167243 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 06:44:00.168156 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 06:44:00.172045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 06:44:00.172281 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 06:44:00.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.173986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 06:44:00.175450 systemd[1]: Finished modprobe@drm.service.
Dec 13 06:44:00.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.176917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 06:44:00.177127 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 06:44:00.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.178736 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 06:44:00.179452 systemd[1]: Finished modprobe@loop.service.
Dec 13 06:44:00.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.181892 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 06:44:00.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.184004 systemd[1]: Finished ensure-sysext.service.
Dec 13 06:44:00.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.187972 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 06:44:00.188035 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.207186 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 06:44:00.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 06:44:00.209912 systemd[1]: Starting systemd-update-done.service...
Dec 13 06:44:00.217000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 06:44:00.217000 audit[1164]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff623abcb0 a2=420 a3=0 items=0 ppid=1133 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 06:44:00.217000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 06:44:00.217811 augenrules[1164]: No rules
Dec 13 06:44:00.218818 systemd[1]: Finished audit-rules.service.
Dec 13 06:44:00.230840 systemd[1]: Finished systemd-update-done.service.
Dec 13 06:44:00.243856 systemd[1]: Started systemd-timesyncd.service.
Dec 13 06:44:00.244608 systemd[1]: Reached target time-set.target.
Dec 13 06:44:00.261142 systemd-resolved[1136]: Positive Trust Anchors:
Dec 13 06:44:00.261665 systemd-resolved[1136]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 06:44:00.261806 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 06:44:00.269655 systemd-resolved[1136]: Using system hostname 'srv-8eon8.gb1.brightbox.com'.
Dec 13 06:44:00.272457 systemd[1]: Started systemd-resolved.service.
Dec 13 06:44:00.273252 systemd[1]: Reached target network.target.
Dec 13 06:44:00.273884 systemd[1]: Reached target nss-lookup.target.
Dec 13 06:44:00.274547 systemd[1]: Reached target sysinit.target.
Dec 13 06:44:00.275294 systemd[1]: Started motdgen.path.
Dec 13 06:44:00.275944 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 06:44:00.276968 systemd[1]: Started logrotate.timer.
Dec 13 06:44:00.277737 systemd[1]: Started mdadm.timer.
Dec 13 06:44:00.278316 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 06:44:00.278983 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 06:44:00.279046 systemd[1]: Reached target paths.target.
Dec 13 06:44:00.279658 systemd[1]: Reached target timers.target.
Dec 13 06:44:00.280766 systemd[1]: Listening on dbus.socket.
Dec 13 06:44:00.283037 systemd[1]: Starting docker.socket...
Dec 13 06:44:00.287470 systemd[1]: Listening on sshd.socket.
Dec 13 06:44:00.288226 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:44:00.288870 systemd[1]: Listening on docker.socket.
Dec 13 06:44:00.289628 systemd[1]: Reached target sockets.target.
Dec 13 06:44:00.290246 systemd[1]: Reached target basic.target.
Dec 13 06:44:00.290937 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.290990 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 06:44:00.293298 systemd[1]: Starting containerd.service...
Dec 13 06:44:00.295373 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 06:44:00.298987 systemd[1]: Starting dbus.service...
Dec 13 06:44:00.301148 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 06:44:00.304840 systemd[1]: Starting extend-filesystems.service...
Dec 13 06:44:00.310563 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 06:44:00.312266 systemd[1]: Starting motdgen.service...
Dec 13 06:44:00.314813 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 06:44:00.318918 systemd[1]: Starting sshd-keygen.service...
Dec 13 06:44:00.324497 systemd[1]: Starting systemd-logind.service...
Dec 13 06:44:00.328526 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 06:44:00.328675 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 06:44:00.329407 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 06:44:00.330553 systemd[1]: Starting update-engine.service...
Dec 13 06:44:00.335368 jq[1178]: false
Dec 13 06:44:00.335594 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 06:44:00.339808 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 06:44:00.340213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 06:44:00.347063 jq[1188]: true
Dec 13 06:44:00.364036 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 06:44:00.364308 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 06:44:00.384509 jq[1190]: true
Dec 13 06:44:00.396094 extend-filesystems[1179]: Found loop1
Dec 13 06:44:01.419193 systemd-resolved[1136]: Clock change detected. Flushing caches.
Dec 13 06:44:01.419386 systemd-timesyncd[1138]: Contacted time server 77.104.162.218:123 (0.flatcar.pool.ntp.org).
Dec 13 06:44:01.419596 systemd-timesyncd[1138]: Initial clock synchronization to Fri 2024-12-13 06:44:01.418784 UTC.
Dec 13 06:44:01.420873 extend-filesystems[1179]: Found vda
Dec 13 06:44:01.424889 extend-filesystems[1179]: Found vda1
Dec 13 06:44:01.425865 extend-filesystems[1179]: Found vda2
Dec 13 06:44:01.427167 extend-filesystems[1179]: Found vda3
Dec 13 06:44:01.428092 extend-filesystems[1179]: Found usr
Dec 13 06:44:01.428939 extend-filesystems[1179]: Found vda4
Dec 13 06:44:01.431199 extend-filesystems[1179]: Found vda6
Dec 13 06:44:01.434603 extend-filesystems[1179]: Found vda7
Dec 13 06:44:01.434603 extend-filesystems[1179]: Found vda9
Dec 13 06:44:01.434603 extend-filesystems[1179]: Checking size of /dev/vda9
Dec 13 06:44:01.456826 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 06:44:01.456880 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 06:44:01.458501 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 06:44:01.458748 systemd[1]: Finished motdgen.service.
Dec 13 06:44:01.467929 dbus-daemon[1176]: [system] SELinux support is enabled
Dec 13 06:44:01.469006 systemd[1]: Started dbus.service.
Dec 13 06:44:01.472315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 06:44:01.472356 systemd[1]: Reached target system-config.target.
Dec 13 06:44:01.473062 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 06:44:01.473098 systemd[1]: Reached target user-config.target.
Dec 13 06:44:01.476478 dbus-daemon[1176]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1017 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 06:44:01.481844 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 06:44:01.493715 update_engine[1187]: I1213 06:44:01.492200 1187 main.cc:92] Flatcar Update Engine starting
Dec 13 06:44:01.498439 extend-filesystems[1179]: Resized partition /dev/vda9
Dec 13 06:44:01.498521 systemd[1]: Started update-engine.service.
Dec 13 06:44:01.499675 update_engine[1187]: I1213 06:44:01.498754 1187 update_check_scheduler.cc:74] Next update check in 11m13s
Dec 13 06:44:01.502367 extend-filesystems[1226]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 06:44:01.502793 systemd[1]: Started locksmithd.service.
Dec 13 06:44:01.531167 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 06:44:01.534590 bash[1217]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 06:44:01.535502 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 06:44:01.588475 systemd-logind[1184]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 06:44:01.588540 systemd-logind[1184]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 06:44:01.588935 systemd-logind[1184]: New seat seat0.
Dec 13 06:44:01.591235 systemd[1]: Started systemd-logind.service.
Dec 13 06:44:01.639940 env[1191]: time="2024-12-13T06:44:01.639775621Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 06:44:01.671580 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 06:44:01.678926 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 06:44:01.679117 systemd[1]: Started systemd-hostnamed.service.
Dec 13 06:44:01.680856 dbus-daemon[1176]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1223 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 06:44:01.687828 systemd[1]: Starting polkit.service...
Dec 13 06:44:01.689778 extend-filesystems[1226]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 06:44:01.689778 extend-filesystems[1226]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 06:44:01.689778 extend-filesystems[1226]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 06:44:01.695949 extend-filesystems[1179]: Resized filesystem in /dev/vda9
Dec 13 06:44:01.692275 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 06:44:01.692512 systemd[1]: Finished extend-filesystems.service.
Dec 13 06:44:01.708725 polkitd[1233]: Started polkitd version 121
Dec 13 06:44:01.721060 env[1191]: time="2024-12-13T06:44:01.721002789Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 06:44:01.721408 env[1191]: time="2024-12-13T06:44:01.721377428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 06:44:01.723368 env[1191]: time="2024-12-13T06:44:01.723326246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 06:44:01.723490 env[1191]: time="2024-12-13T06:44:01.723460606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 06:44:01.723890 env[1191]: time="2024-12-13T06:44:01.723852415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 06:44:01.724408 env[1191]: time="2024-12-13T06:44:01.724374340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 06:44:01.724573 env[1191]: time="2024-12-13T06:44:01.724524934Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 06:44:01.724711 env[1191]: time="2024-12-13T06:44:01.724681328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 06:44:01.725432 env[1191]: time="2024-12-13T06:44:01.725403333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 06:44:01.726732 env[1191]: time="2024-12-13T06:44:01.726701275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 06:44:01.728332 env[1191]: time="2024-12-13T06:44:01.727938375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 06:44:01.728896 env[1191]: time="2024-12-13T06:44:01.728864597Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 06:44:01.729117 env[1191]: time="2024-12-13T06:44:01.729087359Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 06:44:01.729296 env[1191]: time="2024-12-13T06:44:01.729267152Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 06:44:01.731382 polkitd[1233]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 06:44:01.731459 polkitd[1233]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 06:44:01.743409 polkitd[1233]: Finished loading, compiling and executing 2 rules
Dec 13 06:44:01.747673 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 06:44:01.747920 systemd[1]: Started polkit.service.
Dec 13 06:44:01.748272 polkitd[1233]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 06:44:01.763102 systemd-hostnamed[1223]: Hostname set to (static)
Dec 13 06:44:01.769868 env[1191]: time="2024-12-13T06:44:01.769819129Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 06:44:01.769974 env[1191]: time="2024-12-13T06:44:01.769876319Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 06:44:01.769974 env[1191]: time="2024-12-13T06:44:01.769901589Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 06:44:01.770086 env[1191]: time="2024-12-13T06:44:01.769987686Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770157 env[1191]: time="2024-12-13T06:44:01.770092370Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770157 env[1191]: time="2024-12-13T06:44:01.770132781Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770270 env[1191]: time="2024-12-13T06:44:01.770156330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770270 env[1191]: time="2024-12-13T06:44:01.770177451Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770270 env[1191]: time="2024-12-13T06:44:01.770218981Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770270 env[1191]: time="2024-12-13T06:44:01.770242507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770270 env[1191]: time="2024-12-13T06:44:01.770263118Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.770496 env[1191]: time="2024-12-13T06:44:01.770290310Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 06:44:01.770543 env[1191]: time="2024-12-13T06:44:01.770488991Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 06:44:01.770766 env[1191]: time="2024-12-13T06:44:01.770712525Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 06:44:01.771184 env[1191]: time="2024-12-13T06:44:01.771146888Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 06:44:01.771264 env[1191]: time="2024-12-13T06:44:01.771219367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771264 env[1191]: time="2024-12-13T06:44:01.771247202Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 06:44:01.771378 env[1191]: time="2024-12-13T06:44:01.771348358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771489 env[1191]: time="2024-12-13T06:44:01.771380897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771577 env[1191]: time="2024-12-13T06:44:01.771499493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771577 env[1191]: time="2024-12-13T06:44:01.771527793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771577 env[1191]: time="2024-12-13T06:44:01.771561228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771716 env[1191]: time="2024-12-13T06:44:01.771597671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771716 env[1191]: time="2024-12-13T06:44:01.771622396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771716 env[1191]: time="2024-12-13T06:44:01.771642238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771716 env[1191]: time="2024-12-13T06:44:01.771664402Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 06:44:01.771931 env[1191]: time="2024-12-13T06:44:01.771902876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771982 env[1191]: time="2024-12-13T06:44:01.771928829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771982 env[1191]: time="2024-12-13T06:44:01.771948968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.771982 env[1191]: time="2024-12-13T06:44:01.771967108Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 06:44:01.772131 env[1191]: time="2024-12-13T06:44:01.771990892Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 06:44:01.772131 env[1191]: time="2024-12-13T06:44:01.772009473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 06:44:01.772131 env[1191]: time="2024-12-13T06:44:01.772063166Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 06:44:01.772279 env[1191]: time="2024-12-13T06:44:01.772127549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 06:44:01.772525 env[1191]: time="2024-12-13T06:44:01.772444281Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 06:44:01.775049 env[1191]: time="2024-12-13T06:44:01.772576131Z" level=info msg="Connect containerd service"
Dec 13 06:44:01.775049 env[1191]: time="2024-12-13T06:44:01.772644779Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 06:44:01.775049 env[1191]: time="2024-12-13T06:44:01.774070978Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 06:44:01.775049 env[1191]: time="2024-12-13T06:44:01.774208341Z" level=info msg="Start subscribing containerd event"
Dec 13 06:44:01.775049 env[1191]: time="2024-12-13T06:44:01.774280778Z" level=info msg="Start recovering state"
Dec 13 06:44:01.775296 env[1191]: time="2024-12-13T06:44:01.775115258Z" level=info msg="Start event monitor"
Dec 13 06:44:01.775296 env[1191]: time="2024-12-13T06:44:01.775154629Z" level=info msg="Start snapshots syncer"
Dec 13 06:44:01.775296 env[1191]: time="2024-12-13T06:44:01.775176829Z" level=info msg="Start cni network conf syncer for default"
Dec 13 06:44:01.775296 env[1191]: time="2024-12-13T06:44:01.775191814Z" level=info msg="Start streaming server"
Dec 13 06:44:01.775966 env[1191]: time="2024-12-13T06:44:01.775853969Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 06:44:01.775966 env[1191]: time="2024-12-13T06:44:01.775958326Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 06:44:01.777374 systemd[1]: Started containerd.service.
Dec 13 06:44:01.779347 env[1191]: time="2024-12-13T06:44:01.778419295Z" level=info msg="containerd successfully booted in 0.143202s"
Dec 13 06:44:01.812826 locksmithd[1227]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 06:44:01.845781 systemd-networkd[1017]: eth0: Gained IPv6LL
Dec 13 06:44:01.849473 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 06:44:01.850685 systemd[1]: Reached target network-online.target.
Dec 13 06:44:01.853702 systemd[1]: Starting kubelet.service...
Dec 13 06:44:02.222818 systemd[1]: Created slice system-sshd.slice.
Dec 13 06:44:02.732704 sshd_keygen[1201]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 06:44:02.775494 systemd[1]: Finished sshd-keygen.service.
Dec 13 06:44:02.777489 systemd[1]: Started kubelet.service.
Dec 13 06:44:02.783144 systemd[1]: Starting issuegen.service...
Dec 13 06:44:02.786112 systemd[1]: Started sshd@0-10.230.56.170:22-139.178.89.65:35308.service.
Dec 13 06:44:02.794937 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 06:44:02.795252 systemd[1]: Finished issuegen.service.
Dec 13 06:44:02.798265 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 06:44:02.815588 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 06:44:02.818817 systemd[1]: Started getty@tty1.service.
Dec 13 06:44:02.823235 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 06:44:02.824695 systemd[1]: Reached target getty.target.
Dec 13 06:44:03.355786 systemd-networkd[1017]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e2a:24:19ff:fee6:38aa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e2a:24:19ff:fee6:38aa/64 assigned by NDisc.
Dec 13 06:44:03.355799 systemd-networkd[1017]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 06:44:03.498458 kubelet[1258]: E1213 06:44:03.498384 1258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 06:44:03.501037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 06:44:03.501316 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 06:44:03.501881 systemd[1]: kubelet.service: Consumed 1.130s CPU time.
Dec 13 06:44:03.696400 sshd[1260]: Accepted publickey for core from 139.178.89.65 port 35308 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:03.699913 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:03.716482 systemd[1]: Created slice user-500.slice.
Dec 13 06:44:03.719164 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 06:44:03.725684 systemd-logind[1184]: New session 1 of user core.
Dec 13 06:44:03.733837 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 06:44:03.736983 systemd[1]: Starting user@500.service...
Dec 13 06:44:03.742962 (systemd)[1274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:03.849041 systemd[1274]: Queued start job for default target default.target.
Dec 13 06:44:03.849937 systemd[1274]: Reached target paths.target.
Dec 13 06:44:03.849974 systemd[1274]: Reached target sockets.target.
Dec 13 06:44:03.850008 systemd[1274]: Reached target timers.target.
Dec 13 06:44:03.850026 systemd[1274]: Reached target basic.target.
Dec 13 06:44:03.850088 systemd[1274]: Reached target default.target.
Dec 13 06:44:03.850164 systemd[1274]: Startup finished in 97ms.
Dec 13 06:44:03.851227 systemd[1]: Started user@500.service.
Dec 13 06:44:03.855216 systemd[1]: Started session-1.scope.
Dec 13 06:44:04.480156 systemd[1]: Started sshd@1-10.230.56.170:22-139.178.89.65:35314.service.
Dec 13 06:44:05.368890 sshd[1284]: Accepted publickey for core from 139.178.89.65 port 35314 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:05.370843 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:05.378821 systemd[1]: Started session-2.scope.
Dec 13 06:44:05.379918 systemd-logind[1184]: New session 2 of user core.
Dec 13 06:44:06.135085 systemd[1]: Started sshd@2-10.230.56.170:22-139.178.89.65:35330.service.
Dec 13 06:44:06.156807 sshd[1284]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:06.160286 systemd[1]: sshd@1-10.230.56.170:22-139.178.89.65:35314.service: Deactivated successfully.
Dec 13 06:44:06.161440 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 06:44:06.162227 systemd-logind[1184]: Session 2 logged out. Waiting for processes to exit.
Dec 13 06:44:06.163370 systemd-logind[1184]: Removed session 2.
Dec 13 06:44:07.027729 sshd[1289]: Accepted publickey for core from 139.178.89.65 port 35330 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:07.030362 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:07.038113 systemd[1]: Started session-3.scope.
Dec 13 06:44:07.042571 systemd-logind[1184]: New session 3 of user core.
Dec 13 06:44:07.649790 sshd[1289]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:07.653719 systemd-logind[1184]: Session 3 logged out. Waiting for processes to exit.
Dec 13 06:44:07.654179 systemd[1]: sshd@2-10.230.56.170:22-139.178.89.65:35330.service: Deactivated successfully.
Dec 13 06:44:07.655311 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 06:44:07.656448 systemd-logind[1184]: Removed session 3.
Dec 13 06:44:08.453210 coreos-metadata[1174]: Dec 13 06:44:08.453 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 06:44:08.507981 coreos-metadata[1174]: Dec 13 06:44:08.507 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 06:44:08.539651 coreos-metadata[1174]: Dec 13 06:44:08.539 INFO Fetch successful
Dec 13 06:44:08.539774 coreos-metadata[1174]: Dec 13 06:44:08.539 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 06:44:08.576125 coreos-metadata[1174]: Dec 13 06:44:08.576 INFO Fetch successful
Dec 13 06:44:08.578719 unknown[1174]: wrote ssh authorized keys file for user: core
Dec 13 06:44:08.591476 update-ssh-keys[1297]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 06:44:08.592027 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 06:44:08.592552 systemd[1]: Reached target multi-user.target.
Dec 13 06:44:08.594588 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 06:44:08.605236 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 06:44:08.605481 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 06:44:08.605741 systemd[1]: Startup finished in 1.152s (kernel) + 6.380s (initrd) + 13.533s (userspace) = 21.067s.
Dec 13 06:44:13.752735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 06:44:13.753076 systemd[1]: Stopped kubelet.service.
Dec 13 06:44:13.753142 systemd[1]: kubelet.service: Consumed 1.130s CPU time.
Dec 13 06:44:13.755600 systemd[1]: Starting kubelet.service...
Dec 13 06:44:13.893040 systemd[1]: Started kubelet.service.
Dec 13 06:44:13.984351 kubelet[1303]: E1213 06:44:13.984293 1303 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 06:44:13.988610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 06:44:13.988867 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 06:44:17.798411 systemd[1]: Started sshd@3-10.230.56.170:22-139.178.89.65:49504.service.
Dec 13 06:44:18.687864 sshd[1311]: Accepted publickey for core from 139.178.89.65 port 49504 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:18.690492 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:18.697147 systemd-logind[1184]: New session 4 of user core.
Dec 13 06:44:18.698018 systemd[1]: Started session-4.scope.
Dec 13 06:44:19.306847 sshd[1311]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:19.310829 systemd-logind[1184]: Session 4 logged out. Waiting for processes to exit.
Dec 13 06:44:19.311297 systemd[1]: sshd@3-10.230.56.170:22-139.178.89.65:49504.service: Deactivated successfully.
Dec 13 06:44:19.312198 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 06:44:19.313443 systemd-logind[1184]: Removed session 4.
Dec 13 06:44:19.454670 systemd[1]: Started sshd@4-10.230.56.170:22-139.178.89.65:56284.service.
Dec 13 06:44:20.342593 sshd[1317]: Accepted publickey for core from 139.178.89.65 port 56284 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:20.345278 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:20.351853 systemd-logind[1184]: New session 5 of user core.
Dec 13 06:44:20.352648 systemd[1]: Started session-5.scope.
Dec 13 06:44:20.957186 sshd[1317]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:20.960889 systemd[1]: sshd@4-10.230.56.170:22-139.178.89.65:56284.service: Deactivated successfully.
Dec 13 06:44:20.962232 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 06:44:20.963239 systemd-logind[1184]: Session 5 logged out. Waiting for processes to exit.
Dec 13 06:44:20.965217 systemd-logind[1184]: Removed session 5.
Dec 13 06:44:21.104050 systemd[1]: Started sshd@5-10.230.56.170:22-139.178.89.65:56290.service.
Dec 13 06:44:21.985993 sshd[1323]: Accepted publickey for core from 139.178.89.65 port 56290 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:21.987962 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:21.994751 systemd-logind[1184]: New session 6 of user core.
Dec 13 06:44:21.995448 systemd[1]: Started session-6.scope.
Dec 13 06:44:22.601218 sshd[1323]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:22.605243 systemd[1]: sshd@5-10.230.56.170:22-139.178.89.65:56290.service: Deactivated successfully.
Dec 13 06:44:22.606202 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 06:44:22.607097 systemd-logind[1184]: Session 6 logged out. Waiting for processes to exit.
Dec 13 06:44:22.608284 systemd-logind[1184]: Removed session 6.
Dec 13 06:44:22.749893 systemd[1]: Started sshd@6-10.230.56.170:22-139.178.89.65:56302.service.
Dec 13 06:44:23.639751 sshd[1329]: Accepted publickey for core from 139.178.89.65 port 56302 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY
Dec 13 06:44:23.641685 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 06:44:23.649092 systemd[1]: Started session-7.scope.
Dec 13 06:44:23.649647 systemd-logind[1184]: New session 7 of user core.
Dec 13 06:44:24.127419 sudo[1332]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 06:44:24.128384 sudo[1332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 06:44:24.129988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 06:44:24.130306 systemd[1]: Stopped kubelet.service.
Dec 13 06:44:24.132709 systemd[1]: Starting kubelet.service...
Dec 13 06:44:24.157109 systemd[1]: Starting coreos-metadata.service...
Dec 13 06:44:24.296388 systemd[1]: Started kubelet.service.
Dec 13 06:44:24.369073 kubelet[1343]: E1213 06:44:24.368999 1343 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 06:44:24.371560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 06:44:24.371779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 06:44:31.211141 coreos-metadata[1338]: Dec 13 06:44:31.211 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 06:44:31.264167 coreos-metadata[1338]: Dec 13 06:44:31.264 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 06:44:31.266363 coreos-metadata[1338]: Dec 13 06:44:31.266 INFO Fetch successful
Dec 13 06:44:31.266608 coreos-metadata[1338]: Dec 13 06:44:31.266 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 06:44:31.289512 coreos-metadata[1338]: Dec 13 06:44:31.289 INFO Fetch successful
Dec 13 06:44:31.289771 coreos-metadata[1338]: Dec 13 06:44:31.289 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 06:44:31.308018 coreos-metadata[1338]: Dec 13 06:44:31.307 INFO Fetch successful
Dec 13 06:44:31.308224 coreos-metadata[1338]: Dec 13 06:44:31.308 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 06:44:31.325351 coreos-metadata[1338]: Dec 13 06:44:31.325 INFO Fetch successful
Dec 13 06:44:31.325590 coreos-metadata[1338]: Dec 13 06:44:31.325 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 06:44:31.341877 coreos-metadata[1338]: Dec 13 06:44:31.341 INFO Fetch successful
Dec 13 06:44:31.353024 systemd[1]: Finished coreos-metadata.service.
Dec 13 06:44:32.190190 systemd[1]: Stopped kubelet.service.
Dec 13 06:44:32.193665 systemd[1]: Starting kubelet.service...
Dec 13 06:44:32.220967 systemd[1]: Reloading.
Dec 13 06:44:32.376786 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-12-13T06:44:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 06:44:32.376860 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-12-13T06:44:32Z" level=info msg="torcx already run"
Dec 13 06:44:32.466465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 06:44:32.467600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 06:44:32.496744 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 06:44:32.635506 systemd[1]: Started kubelet.service.
Dec 13 06:44:32.640157 systemd[1]: Stopping kubelet.service...
Dec 13 06:44:32.641134 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 06:44:32.641409 systemd[1]: Stopped kubelet.service.
Dec 13 06:44:32.645203 systemd[1]: Starting kubelet.service...
Dec 13 06:44:32.767686 systemd[1]: Started kubelet.service.
Dec 13 06:44:32.857587 kubelet[1462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 06:44:32.857587 kubelet[1462]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 06:44:32.857587 kubelet[1462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 06:44:32.858264 kubelet[1462]: I1213 06:44:32.857734 1462 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 06:44:33.397614 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 06:44:33.581502 kubelet[1462]: I1213 06:44:33.581427 1462 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 06:44:33.581502 kubelet[1462]: I1213 06:44:33.581478 1462 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 06:44:33.581853 kubelet[1462]: I1213 06:44:33.581828 1462 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 06:44:33.601700 kubelet[1462]: I1213 06:44:33.601665 1462 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 06:44:33.622429 kubelet[1462]: I1213 06:44:33.622395 1462 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 06:44:33.625637 kubelet[1462]: I1213 06:44:33.625576 1462 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 06:44:33.626119 kubelet[1462]: I1213 06:44:33.625757 1462 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.56.170","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 06:44:33.627116 kubelet[1462]: I1213 06:44:33.627087 1462 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 06:44:33.627271 kubelet[1462]: I1213 06:44:33.627249 1462 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 06:44:33.629105 kubelet[1462]: I1213 06:44:33.629067 1462 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 06:44:33.630722 kubelet[1462]: I1213 06:44:33.630357 1462 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 06:44:33.630876 kubelet[1462]: I1213 06:44:33.630851 1462 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 06:44:33.631067 kubelet[1462]: I1213 06:44:33.631035 1462 kubelet.go:312] "Adding apiserver pod source"
Dec 13 06:44:33.631433 kubelet[1462]: I1213 06:44:33.631402 1462 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 06:44:33.631659 kubelet[1462]: E1213 06:44:33.631342 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:33.631776 kubelet[1462]: E1213 06:44:33.631245 1462 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:33.637588 kubelet[1462]: I1213 06:44:33.637531 1462 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 06:44:33.639412 kubelet[1462]: I1213 06:44:33.639387 1462 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 06:44:33.639684 kubelet[1462]: W1213 06:44:33.639660 1462 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 06:44:33.640854 kubelet[1462]: I1213 06:44:33.640817 1462 server.go:1264] "Started kubelet"
Dec 13 06:44:33.643477 kubelet[1462]: W1213 06:44:33.643444 1462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 06:44:33.643616 kubelet[1462]: E1213 06:44:33.643491 1462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 06:44:33.644397 kubelet[1462]: W1213 06:44:33.643699 1462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.56.170" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 06:44:33.644397 kubelet[1462]: E1213 06:44:33.643731 1462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.56.170" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 06:44:33.644397 kubelet[1462]: I1213 06:44:33.643786 1462 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 06:44:33.646741 kubelet[1462]: I1213 06:44:33.646608 1462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 06:44:33.649031 kubelet[1462]: I1213 06:44:33.647928 1462 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 06:44:33.650920 kubelet[1462]: I1213 06:44:33.650892 1462 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 06:44:33.655898 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 06:44:33.656116 kubelet[1462]: I1213 06:44:33.655888 1462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 06:44:33.660932 kubelet[1462]: E1213 06:44:33.660895 1462 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 06:44:33.668310 kubelet[1462]: I1213 06:44:33.668285 1462 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 06:44:33.669030 kubelet[1462]: I1213 06:44:33.669000 1462 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 06:44:33.679730 kubelet[1462]: I1213 06:44:33.679685 1462 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 06:44:33.680487 kubelet[1462]: I1213 06:44:33.680451 1462 factory.go:221] Registration of the systemd container factory successfully
Dec 13 06:44:33.680658 kubelet[1462]: I1213 06:44:33.680621 1462 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 06:44:33.682505 kubelet[1462]: I1213 06:44:33.682460 1462 factory.go:221] Registration of the containerd container factory successfully
Dec 13 06:44:33.710677 kubelet[1462]: E1213 06:44:33.708260 1462 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.56.170\" not found" node="10.230.56.170"
Dec 13 06:44:33.720103 kubelet[1462]: I1213 06:44:33.720077 1462 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 06:44:33.720267 kubelet[1462]: I1213 06:44:33.720242 1462 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 06:44:33.720411 kubelet[1462]: I1213 06:44:33.720388 1462 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 06:44:33.732820 kubelet[1462]: I1213 06:44:33.732789 1462 policy_none.go:49] "None policy: Start"
Dec 13 06:44:33.733886 kubelet[1462]: I1213 06:44:33.733858 1462 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 06:44:33.734024 kubelet[1462]: I1213 06:44:33.734002 1462 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 06:44:33.749525 systemd[1]: Created slice kubepods.slice.
Dec 13 06:44:33.756706 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 06:44:33.762390 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 06:44:33.770959 kubelet[1462]: I1213 06:44:33.770910 1462 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 06:44:33.771258 kubelet[1462]: I1213 06:44:33.771176 1462 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 06:44:33.771417 kubelet[1462]: I1213 06:44:33.771390 1462 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 06:44:33.775021 kubelet[1462]: I1213 06:44:33.774820 1462 kubelet_node_status.go:73] "Attempting to register node" node="10.230.56.170"
Dec 13 06:44:33.777225 kubelet[1462]: E1213 06:44:33.777186 1462 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.56.170\" not found"
Dec 13 06:44:33.782376 kubelet[1462]: I1213 06:44:33.782350 1462 kubelet_node_status.go:76] "Successfully registered node" node="10.230.56.170"
Dec 13 06:44:33.796376 kubelet[1462]: E1213 06:44:33.796312 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:33.804050 sudo[1332]: pam_unix(sudo:session): session closed for user root
Dec 13 06:44:33.842105 kubelet[1462]: I1213 06:44:33.842022 1462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 06:44:33.844022 kubelet[1462]: I1213 06:44:33.843982 1462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 06:44:33.844103 kubelet[1462]: I1213 06:44:33.844038 1462 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 06:44:33.844103 kubelet[1462]: I1213 06:44:33.844083 1462 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 06:44:33.844288 kubelet[1462]: E1213 06:44:33.844187 1462 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 06:44:33.896724 kubelet[1462]: E1213 06:44:33.896684 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:33.950023 sshd[1329]: pam_unix(sshd:session): session closed for user core
Dec 13 06:44:33.955348 systemd[1]: sshd@6-10.230.56.170:22-139.178.89.65:56302.service: Deactivated successfully.
Dec 13 06:44:33.956770 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 06:44:33.957838 systemd-logind[1184]: Session 7 logged out. Waiting for processes to exit.
Dec 13 06:44:33.959498 systemd-logind[1184]: Removed session 7.
Dec 13 06:44:33.997570 kubelet[1462]: E1213 06:44:33.997502 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.098459 kubelet[1462]: E1213 06:44:34.098386 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.199859 kubelet[1462]: E1213 06:44:34.199792 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.300868 kubelet[1462]: E1213 06:44:34.300652 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.402071 kubelet[1462]: E1213 06:44:34.401879 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.502773 kubelet[1462]: E1213 06:44:34.502690 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.585544 kubelet[1462]: I1213 06:44:34.585382 1462 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 06:44:34.585998 kubelet[1462]: W1213 06:44:34.585928 1462 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 06:44:34.586100 kubelet[1462]: W1213 06:44:34.586006 1462 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 06:44:34.603834 kubelet[1462]: E1213 06:44:34.603792 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.632130 kubelet[1462]: E1213 06:44:34.632086 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:34.704105 kubelet[1462]: E1213 06:44:34.704063 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.804618 kubelet[1462]: E1213 06:44:34.804532 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:34.905285 kubelet[1462]: E1213 06:44:34.905122 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:35.006317 kubelet[1462]: E1213 06:44:35.006262 1462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.56.170\" not found"
Dec 13 06:44:35.107286 kubelet[1462]: I1213 06:44:35.107259 1462 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 06:44:35.108256 env[1191]: time="2024-12-13T06:44:35.108090607Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 06:44:35.108823 kubelet[1462]: I1213 06:44:35.108510 1462 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 06:44:35.633152 kubelet[1462]: E1213 06:44:35.632975 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:35.633152 kubelet[1462]: I1213 06:44:35.633100 1462 apiserver.go:52] "Watching apiserver"
Dec 13 06:44:35.639536 kubelet[1462]: I1213 06:44:35.639493 1462 topology_manager.go:215] "Topology Admit Handler" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" podNamespace="kube-system" podName="cilium-plfxs"
Dec 13 06:44:35.639977 kubelet[1462]: I1213 06:44:35.639947 1462 topology_manager.go:215] "Topology Admit Handler" podUID="3473a739-763b-434a-9002-5264209e9e8d" podNamespace="kube-system" podName="kube-proxy-kd5k2"
Dec 13 06:44:35.647796 systemd[1]: Created slice kubepods-burstable-podf19f87fd_8f24_4d4d_9126_bdfd84bd6ee3.slice.
Dec 13 06:44:35.661321 systemd[1]: Created slice kubepods-besteffort-pod3473a739_763b_434a_9002_5264209e9e8d.slice.
Dec 13 06:44:35.670542 kubelet[1462]: I1213 06:44:35.670519 1462 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 06:44:35.693703 kubelet[1462]: I1213 06:44:35.693671 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-lib-modules\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.693867 kubelet[1462]: I1213 06:44:35.693727 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-xtables-lock\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.693867 kubelet[1462]: I1213 06:44:35.693783 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-kernel\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.693867 kubelet[1462]: I1213 06:44:35.693837 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3473a739-763b-434a-9002-5264209e9e8d-kube-proxy\") pod \"kube-proxy-kd5k2\" (UID: \"3473a739-763b-434a-9002-5264209e9e8d\") " pod="kube-system/kube-proxy-kd5k2"
Dec 13 06:44:35.694132 kubelet[1462]: I1213 06:44:35.693866 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3473a739-763b-434a-9002-5264209e9e8d-xtables-lock\") pod \"kube-proxy-kd5k2\" (UID: \"3473a739-763b-434a-9002-5264209e9e8d\") " pod="kube-system/kube-proxy-kd5k2"
Dec 13 06:44:35.694132 kubelet[1462]: I1213 06:44:35.693891 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hostproc\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694132 kubelet[1462]: I1213 06:44:35.693918 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-cgroup\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694132 kubelet[1462]: I1213 06:44:35.693944 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cni-path\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694132 kubelet[1462]: I1213 06:44:35.693972 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hubble-tls\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694132 kubelet[1462]: I1213 06:44:35.694017 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh2s\" (UniqueName: \"kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-kube-api-access-jbh2s\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694457 kubelet[1462]: I1213 06:44:35.694049 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-bpf-maps\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694457 kubelet[1462]: I1213 06:44:35.694077 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-net\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694457 kubelet[1462]: I1213 06:44:35.694113 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3473a739-763b-434a-9002-5264209e9e8d-lib-modules\") pod \"kube-proxy-kd5k2\" (UID: \"3473a739-763b-434a-9002-5264209e9e8d\") " pod="kube-system/kube-proxy-kd5k2"
Dec 13 06:44:35.694457 kubelet[1462]: I1213 06:44:35.694151 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mjt\" (UniqueName: \"kubernetes.io/projected/3473a739-763b-434a-9002-5264209e9e8d-kube-api-access-x6mjt\") pod \"kube-proxy-kd5k2\" (UID: \"3473a739-763b-434a-9002-5264209e9e8d\") " pod="kube-system/kube-proxy-kd5k2"
Dec 13 06:44:35.694457 kubelet[1462]: I1213 06:44:35.694179 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-run\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694731 kubelet[1462]: I1213 06:44:35.694213 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-clustermesh-secrets\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694731 kubelet[1462]: I1213 06:44:35.694257 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-config-path\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.694731 kubelet[1462]: I1213 06:44:35.694308 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-etc-cni-netd\") pod \"cilium-plfxs\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " pod="kube-system/cilium-plfxs"
Dec 13 06:44:35.959726 env[1191]: time="2024-12-13T06:44:35.959636556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plfxs,Uid:f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3,Namespace:kube-system,Attempt:0,}"
Dec 13 06:44:35.970386 env[1191]: time="2024-12-13T06:44:35.970334021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kd5k2,Uid:3473a739-763b-434a-9002-5264209e9e8d,Namespace:kube-system,Attempt:0,}"
Dec 13 06:44:36.634378 kubelet[1462]: E1213 06:44:36.634295 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:36.763999 env[1191]: time="2024-12-13T06:44:36.763920964Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.768716 env[1191]: time="2024-12-13T06:44:36.768675565Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.774087 env[1191]: time="2024-12-13T06:44:36.774047633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.780312 env[1191]: time="2024-12-13T06:44:36.780275119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.782375 env[1191]: time="2024-12-13T06:44:36.782340109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.784056 env[1191]: time="2024-12-13T06:44:36.784020348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.786332 env[1191]: time="2024-12-13T06:44:36.786267361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.787385 env[1191]: time="2024-12-13T06:44:36.787328665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:36.803974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955575650.mount: Deactivated successfully.
Dec 13 06:44:36.819461 env[1191]: time="2024-12-13T06:44:36.819352957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:44:36.819685 env[1191]: time="2024-12-13T06:44:36.819446020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:44:36.819685 env[1191]: time="2024-12-13T06:44:36.819464725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:44:36.820315 env[1191]: time="2024-12-13T06:44:36.820115343Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84fe67c416b7fa7290de89164c592bc709b6485997eac4b51b094d155564b2b9 pid=1525 runtime=io.containerd.runc.v2
Dec 13 06:44:36.820630 env[1191]: time="2024-12-13T06:44:36.820484451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:44:36.820909 env[1191]: time="2024-12-13T06:44:36.820831030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:44:36.821155 env[1191]: time="2024-12-13T06:44:36.821090757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:44:36.821791 env[1191]: time="2024-12-13T06:44:36.821656733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029 pid=1526 runtime=io.containerd.runc.v2
Dec 13 06:44:36.854510 systemd[1]: Started cri-containerd-6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029.scope.
Dec 13 06:44:36.865027 systemd[1]: Started cri-containerd-84fe67c416b7fa7290de89164c592bc709b6485997eac4b51b094d155564b2b9.scope.
Dec 13 06:44:36.924088 env[1191]: time="2024-12-13T06:44:36.924025386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kd5k2,Uid:3473a739-763b-434a-9002-5264209e9e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"84fe67c416b7fa7290de89164c592bc709b6485997eac4b51b094d155564b2b9\""
Dec 13 06:44:36.927431 env[1191]: time="2024-12-13T06:44:36.927395432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 06:44:36.934948 env[1191]: time="2024-12-13T06:44:36.934904152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plfxs,Uid:f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\""
Dec 13 06:44:37.635485 kubelet[1462]: E1213 06:44:37.635369 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:37.803526 systemd[1]: run-containerd-runc-k8s.io-84fe67c416b7fa7290de89164c592bc709b6485997eac4b51b094d155564b2b9-runc.7umWzI.mount: Deactivated successfully.
Dec 13 06:44:38.518141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3313743799.mount: Deactivated successfully.
Dec 13 06:44:38.635912 kubelet[1462]: E1213 06:44:38.635868 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:39.510781 env[1191]: time="2024-12-13T06:44:39.510647097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:39.515070 env[1191]: time="2024-12-13T06:44:39.513771742Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:39.516619 env[1191]: time="2024-12-13T06:44:39.516417189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:39.525569 env[1191]: time="2024-12-13T06:44:39.525498619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:39.526453 env[1191]: time="2024-12-13T06:44:39.526395647Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 06:44:39.534335 env[1191]: time="2024-12-13T06:44:39.530592530Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 06:44:39.537592 env[1191]: time="2024-12-13T06:44:39.537519158Z" level=info msg="CreateContainer within sandbox \"84fe67c416b7fa7290de89164c592bc709b6485997eac4b51b094d155564b2b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 06:44:39.582608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892702432.mount: Deactivated successfully.
Dec 13 06:44:39.589987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202320535.mount: Deactivated successfully.
Dec 13 06:44:39.593879 env[1191]: time="2024-12-13T06:44:39.593827926Z" level=info msg="CreateContainer within sandbox \"84fe67c416b7fa7290de89164c592bc709b6485997eac4b51b094d155564b2b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb2fab4c6874f0de8cfb3d5669e83ffd9103e07e08239e57530ade6c3779e3f0\""
Dec 13 06:44:39.595282 env[1191]: time="2024-12-13T06:44:39.595224657Z" level=info msg="StartContainer for \"fb2fab4c6874f0de8cfb3d5669e83ffd9103e07e08239e57530ade6c3779e3f0\""
Dec 13 06:44:39.623473 systemd[1]: Started cri-containerd-fb2fab4c6874f0de8cfb3d5669e83ffd9103e07e08239e57530ade6c3779e3f0.scope.
Dec 13 06:44:39.636994 kubelet[1462]: E1213 06:44:39.636921 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:39.678889 env[1191]: time="2024-12-13T06:44:39.678833700Z" level=info msg="StartContainer for \"fb2fab4c6874f0de8cfb3d5669e83ffd9103e07e08239e57530ade6c3779e3f0\" returns successfully"
Dec 13 06:44:39.879434 kubelet[1462]: I1213 06:44:39.878768 1462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kd5k2" podStartSLOduration=4.276802887 podStartE2EDuration="6.878738296s" podCreationTimestamp="2024-12-13 06:44:33 +0000 UTC" firstStartedPulling="2024-12-13 06:44:36.926597309 +0000 UTC m=+4.152233400" lastFinishedPulling="2024-12-13 06:44:39.528532718 +0000 UTC m=+6.754168809" observedRunningTime="2024-12-13 06:44:39.878638993 +0000 UTC m=+7.104275098" watchObservedRunningTime="2024-12-13 06:44:39.878738296 +0000 UTC m=+7.104374394"
Dec 13 06:44:40.637688 kubelet[1462]: E1213 06:44:40.637614 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:41.638692 kubelet[1462]: E1213 06:44:41.638625 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:42.638882 kubelet[1462]: E1213 06:44:42.638837 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:43.639369 kubelet[1462]: E1213 06:44:43.639247 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:44.639915 kubelet[1462]: E1213 06:44:44.639841 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:45.640284 kubelet[1462]: E1213 06:44:45.640149 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:46.546599 update_engine[1187]: I1213 06:44:46.545727 1187 update_attempter.cc:509] Updating boot flags...
Dec 13 06:44:46.641293 kubelet[1462]: E1213 06:44:46.641164 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:47.266988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504712947.mount: Deactivated successfully.
Dec 13 06:44:47.642576 kubelet[1462]: E1213 06:44:47.642087 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:48.642651 kubelet[1462]: E1213 06:44:48.642595 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:49.643397 kubelet[1462]: E1213 06:44:49.642749 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:50.643173 kubelet[1462]: E1213 06:44:50.643117 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:51.643346 kubelet[1462]: E1213 06:44:51.643256 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:51.820434 env[1191]: time="2024-12-13T06:44:51.820336529Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:51.822656 env[1191]: time="2024-12-13T06:44:51.822602172Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:51.826167 env[1191]: time="2024-12-13T06:44:51.826092860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:44:51.828586 env[1191]: time="2024-12-13T06:44:51.827495479Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 06:44:51.832283 env[1191]: time="2024-12-13T06:44:51.832242401Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 06:44:51.847740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719723461.mount: Deactivated successfully.
Dec 13 06:44:51.857924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002948873.mount: Deactivated successfully.
Dec 13 06:44:51.860241 env[1191]: time="2024-12-13T06:44:51.860167298Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\""
Dec 13 06:44:51.860941 env[1191]: time="2024-12-13T06:44:51.860905081Z" level=info msg="StartContainer for \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\""
Dec 13 06:44:51.896100 systemd[1]: Started cri-containerd-8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a.scope.
Dec 13 06:44:51.953244 env[1191]: time="2024-12-13T06:44:51.952345416Z" level=info msg="StartContainer for \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\" returns successfully"
Dec 13 06:44:51.963784 systemd[1]: cri-containerd-8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a.scope: Deactivated successfully.
Dec 13 06:44:52.258125 env[1191]: time="2024-12-13T06:44:52.258039801Z" level=info msg="shim disconnected" id=8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a
Dec 13 06:44:52.258125 env[1191]: time="2024-12-13T06:44:52.258111486Z" level=warning msg="cleaning up after shim disconnected" id=8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a namespace=k8s.io
Dec 13 06:44:52.258125 env[1191]: time="2024-12-13T06:44:52.258137912Z" level=info msg="cleaning up dead shim"
Dec 13 06:44:52.270591 env[1191]: time="2024-12-13T06:44:52.270506345Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:44:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1820 runtime=io.containerd.runc.v2\n"
Dec 13 06:44:52.644458 kubelet[1462]: E1213 06:44:52.643982 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:52.845062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a-rootfs.mount: Deactivated successfully.
Dec 13 06:44:52.943221 env[1191]: time="2024-12-13T06:44:52.943149436Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 06:44:52.963289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378136209.mount: Deactivated successfully.
Dec 13 06:44:52.973588 env[1191]: time="2024-12-13T06:44:52.973504024Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\""
Dec 13 06:44:52.976336 env[1191]: time="2024-12-13T06:44:52.976296861Z" level=info msg="StartContainer for \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\""
Dec 13 06:44:53.004661 systemd[1]: Started cri-containerd-002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c.scope.
Dec 13 06:44:53.058097 env[1191]: time="2024-12-13T06:44:53.058040449Z" level=info msg="StartContainer for \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\" returns successfully"
Dec 13 06:44:53.074361 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 06:44:53.074711 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 06:44:53.076367 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 06:44:53.080862 systemd[1]: Starting systemd-sysctl.service...
Dec 13 06:44:53.082605 systemd[1]: cri-containerd-002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c.scope: Deactivated successfully.
Dec 13 06:44:53.105131 systemd[1]: Finished systemd-sysctl.service.
Dec 13 06:44:53.114505 env[1191]: time="2024-12-13T06:44:53.114350866Z" level=info msg="shim disconnected" id=002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c
Dec 13 06:44:53.114793 env[1191]: time="2024-12-13T06:44:53.114759611Z" level=warning msg="cleaning up after shim disconnected" id=002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c namespace=k8s.io
Dec 13 06:44:53.114927 env[1191]: time="2024-12-13T06:44:53.114898668Z" level=info msg="cleaning up dead shim"
Dec 13 06:44:53.126013 env[1191]: time="2024-12-13T06:44:53.125957473Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:44:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1886 runtime=io.containerd.runc.v2\n"
Dec 13 06:44:53.631861 kubelet[1462]: E1213 06:44:53.631805 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:53.644145 kubelet[1462]: E1213 06:44:53.644114 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:53.844294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c-rootfs.mount: Deactivated successfully.
Dec 13 06:44:53.946815 env[1191]: time="2024-12-13T06:44:53.946747225Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 06:44:53.969515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232523241.mount: Deactivated successfully.
Dec 13 06:44:53.977377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278975262.mount: Deactivated successfully.
Dec 13 06:44:53.980754 env[1191]: time="2024-12-13T06:44:53.980644050Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\""
Dec 13 06:44:53.981866 env[1191]: time="2024-12-13T06:44:53.981831664Z" level=info msg="StartContainer for \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\""
Dec 13 06:44:54.004776 systemd[1]: Started cri-containerd-5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e.scope.
Dec 13 06:44:54.057528 systemd[1]: cri-containerd-5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e.scope: Deactivated successfully.
Dec 13 06:44:54.060783 env[1191]: time="2024-12-13T06:44:54.060702489Z" level=info msg="StartContainer for \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\" returns successfully"
Dec 13 06:44:54.087352 env[1191]: time="2024-12-13T06:44:54.087296079Z" level=info msg="shim disconnected" id=5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e
Dec 13 06:44:54.087764 env[1191]: time="2024-12-13T06:44:54.087723449Z" level=warning msg="cleaning up after shim disconnected" id=5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e namespace=k8s.io
Dec 13 06:44:54.087940 env[1191]: time="2024-12-13T06:44:54.087910555Z" level=info msg="cleaning up dead shim"
Dec 13 06:44:54.097924 env[1191]: time="2024-12-13T06:44:54.097875721Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:44:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1946 runtime=io.containerd.runc.v2\n"
Dec 13 06:44:54.645278 kubelet[1462]: E1213 06:44:54.645213 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:54.951224 env[1191]: time="2024-12-13T06:44:54.951106735Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 06:44:54.992037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238717757.mount: Deactivated successfully.
Dec 13 06:44:55.000807 env[1191]: time="2024-12-13T06:44:55.000745673Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\""
Dec 13 06:44:55.001508 env[1191]: time="2024-12-13T06:44:55.001471637Z" level=info msg="StartContainer for \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\""
Dec 13 06:44:55.026265 systemd[1]: Started cri-containerd-789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd.scope.
Dec 13 06:44:55.066123 systemd[1]: cri-containerd-789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd.scope: Deactivated successfully.
Dec 13 06:44:55.068468 env[1191]: time="2024-12-13T06:44:55.068340685Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19f87fd_8f24_4d4d_9126_bdfd84bd6ee3.slice/cri-containerd-789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd.scope/memory.events\": no such file or directory"
Dec 13 06:44:55.070529 env[1191]: time="2024-12-13T06:44:55.070475800Z" level=info msg="StartContainer for \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\" returns successfully"
Dec 13 06:44:55.095646 env[1191]: time="2024-12-13T06:44:55.095515896Z" level=info msg="shim disconnected" id=789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd
Dec 13 06:44:55.095888 env[1191]: time="2024-12-13T06:44:55.095650078Z" level=warning msg="cleaning up after shim disconnected" id=789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd namespace=k8s.io
Dec 13 06:44:55.095888 env[1191]: time="2024-12-13T06:44:55.095669768Z" level=info msg="cleaning up dead shim"
Dec 13 06:44:55.105923 env[1191]: time="2024-12-13T06:44:55.105884523Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:44:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1998 runtime=io.containerd.runc.v2\n"
Dec 13 06:44:55.645661 kubelet[1462]: E1213 06:44:55.645606 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:55.844499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd-rootfs.mount: Deactivated successfully.
Dec 13 06:44:55.956400 env[1191]: time="2024-12-13T06:44:55.956345231Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 06:44:55.974923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657074456.mount: Deactivated successfully.
Dec 13 06:44:55.981862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167390297.mount: Deactivated successfully.
Dec 13 06:44:55.985722 env[1191]: time="2024-12-13T06:44:55.985680167Z" level=info msg="CreateContainer within sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\""
Dec 13 06:44:55.986610 env[1191]: time="2024-12-13T06:44:55.986575735Z" level=info msg="StartContainer for \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\""
Dec 13 06:44:56.009636 systemd[1]: Started cri-containerd-23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df.scope.
Dec 13 06:44:56.062597 env[1191]: time="2024-12-13T06:44:56.058038058Z" level=info msg="StartContainer for \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\" returns successfully"
Dec 13 06:44:56.207988 kubelet[1462]: I1213 06:44:56.206829 1462 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 06:44:56.646887 kubelet[1462]: E1213 06:44:56.646743 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:56.787652 kernel: Initializing XFRM netlink socket
Dec 13 06:44:56.993675 kubelet[1462]: I1213 06:44:56.993573 1462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-plfxs" podStartSLOduration=9.100234961 podStartE2EDuration="23.993516141s" podCreationTimestamp="2024-12-13 06:44:33 +0000 UTC" firstStartedPulling="2024-12-13 06:44:36.936440023 +0000 UTC m=+4.162076108" lastFinishedPulling="2024-12-13 06:44:51.829721203 +0000 UTC m=+19.055357288" observedRunningTime="2024-12-13 06:44:56.991391283 +0000 UTC m=+24.217027397" watchObservedRunningTime="2024-12-13 06:44:56.993516141 +0000 UTC m=+24.219152242"
Dec 13 06:44:57.647215 kubelet[1462]: E1213 06:44:57.647121 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:58.523631 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 06:44:58.523930 systemd-networkd[1017]: cilium_host: Link UP
Dec 13 06:44:58.524239 systemd-networkd[1017]: cilium_net: Link UP
Dec 13 06:44:58.528504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 06:44:58.527350 systemd-networkd[1017]: cilium_net: Gained carrier
Dec 13 06:44:58.527924 systemd-networkd[1017]: cilium_host: Gained carrier
Dec 13 06:44:58.531838 systemd-networkd[1017]: cilium_net: Gained IPv6LL
Dec 13 06:44:58.648211 kubelet[1462]: E1213 06:44:58.648135 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:44:58.705581 systemd-networkd[1017]: cilium_vxlan: Link UP
Dec 13 06:44:58.705595 systemd-networkd[1017]: cilium_vxlan: Gained carrier
Dec 13 06:44:59.099586 kernel: NET: Registered PF_ALG protocol family
Dec 13 06:44:59.400847 systemd-networkd[1017]: cilium_host: Gained IPv6LL
Dec 13 06:44:59.649422 kubelet[1462]: E1213 06:44:59.649347 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:00.134331 systemd-networkd[1017]: lxc_health: Link UP
Dec 13 06:45:00.150629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 06:45:00.150890 systemd-networkd[1017]: lxc_health: Gained carrier
Dec 13 06:45:00.469959 systemd-networkd[1017]: cilium_vxlan: Gained IPv6LL
Dec 13 06:45:00.650087 kubelet[1462]: E1213 06:45:00.649959 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:01.651056 kubelet[1462]: E1213 06:45:01.650977 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:01.749993 systemd-networkd[1017]: lxc_health: Gained IPv6LL
Dec 13 06:45:02.651585 kubelet[1462]: E1213 06:45:02.651506 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:03.653130 kubelet[1462]: E1213 06:45:03.653066 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:04.653454 kubelet[1462]: E1213 06:45:04.653398 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:04.739926 kubelet[1462]: I1213 06:45:04.739860 1462 topology_manager.go:215] "Topology Admit Handler" podUID="d28d56a2-ff39-4313-883c-e6840741758a" podNamespace="default" podName="nginx-deployment-85f456d6dd-cfpdc"
Dec 13 06:45:04.751142 systemd[1]: Created slice kubepods-besteffort-podd28d56a2_ff39_4313_883c_e6840741758a.slice.
Dec 13 06:45:04.800770 kubelet[1462]: I1213 06:45:04.800704 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78n5h\" (UniqueName: \"kubernetes.io/projected/d28d56a2-ff39-4313-883c-e6840741758a-kube-api-access-78n5h\") pod \"nginx-deployment-85f456d6dd-cfpdc\" (UID: \"d28d56a2-ff39-4313-883c-e6840741758a\") " pod="default/nginx-deployment-85f456d6dd-cfpdc"
Dec 13 06:45:05.059949 env[1191]: time="2024-12-13T06:45:05.058625172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfpdc,Uid:d28d56a2-ff39-4313-883c-e6840741758a,Namespace:default,Attempt:0,}"
Dec 13 06:45:05.127860 systemd-networkd[1017]: lxccbe4ba07be08: Link UP
Dec 13 06:45:05.140820 kernel: eth0: renamed from tmp1c3d4
Dec 13 06:45:05.151513 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 06:45:05.152761 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccbe4ba07be08: link becomes ready
Dec 13 06:45:05.151721 systemd-networkd[1017]: lxccbe4ba07be08: Gained carrier
Dec 13 06:45:05.654383 kubelet[1462]: E1213 06:45:05.654254 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:06.586966 env[1191]: time="2024-12-13T06:45:06.585762518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:45:06.586966 env[1191]: time="2024-12-13T06:45:06.585863910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:45:06.586966 env[1191]: time="2024-12-13T06:45:06.585881879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:45:06.587832 env[1191]: time="2024-12-13T06:45:06.587136434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c3d413a5713ab6e768025370e0d84540ce9870e0c8afeda48cb7deb7ea7eb2f pid=2550 runtime=io.containerd.runc.v2
Dec 13 06:45:06.617067 systemd[1]: run-containerd-runc-k8s.io-1c3d413a5713ab6e768025370e0d84540ce9870e0c8afeda48cb7deb7ea7eb2f-runc.htsI5c.mount: Deactivated successfully.
Dec 13 06:45:06.623852 systemd[1]: Started cri-containerd-1c3d413a5713ab6e768025370e0d84540ce9870e0c8afeda48cb7deb7ea7eb2f.scope.
Dec 13 06:45:06.654613 kubelet[1462]: E1213 06:45:06.654535 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:06.690306 env[1191]: time="2024-12-13T06:45:06.690248203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfpdc,Uid:d28d56a2-ff39-4313-883c-e6840741758a,Namespace:default,Attempt:0,} returns sandbox id \"1c3d413a5713ab6e768025370e0d84540ce9870e0c8afeda48cb7deb7ea7eb2f\""
Dec 13 06:45:06.693462 env[1191]: time="2024-12-13T06:45:06.693429670Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 06:45:07.062056 systemd-networkd[1017]: lxccbe4ba07be08: Gained IPv6LL
Dec 13 06:45:07.655614 kubelet[1462]: E1213 06:45:07.655537 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:08.657225 kubelet[1462]: E1213 06:45:08.656954 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:09.657681 kubelet[1462]: E1213 06:45:09.657618 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:10.658617 kubelet[1462]: E1213 06:45:10.658538 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:11.379695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121382597.mount: Deactivated successfully.
Dec 13 06:45:11.659784 kubelet[1462]: E1213 06:45:11.659640 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:12.660487 kubelet[1462]: E1213 06:45:12.660422 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:13.631931 kubelet[1462]: E1213 06:45:13.631876 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:13.660633 kubelet[1462]: E1213 06:45:13.660599 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:13.822135 env[1191]: time="2024-12-13T06:45:13.822028913Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:13.825372 env[1191]: time="2024-12-13T06:45:13.825306486Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:13.829869 env[1191]: time="2024-12-13T06:45:13.829831132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:13.841300 env[1191]: time="2024-12-13T06:45:13.841256517Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:13.842750 env[1191]: time="2024-12-13T06:45:13.842670074Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 06:45:13.851436 env[1191]: time="2024-12-13T06:45:13.851394067Z" level=info msg="CreateContainer within sandbox \"1c3d413a5713ab6e768025370e0d84540ce9870e0c8afeda48cb7deb7ea7eb2f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 06:45:13.864377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643100893.mount: Deactivated successfully.
Dec 13 06:45:13.872167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78462295.mount: Deactivated successfully.
Dec 13 06:45:13.875745 env[1191]: time="2024-12-13T06:45:13.875678121Z" level=info msg="CreateContainer within sandbox \"1c3d413a5713ab6e768025370e0d84540ce9870e0c8afeda48cb7deb7ea7eb2f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f692dda2f732d1429af13ea6b88e00bde2994302496291905d64944156525166\""
Dec 13 06:45:13.876881 env[1191]: time="2024-12-13T06:45:13.876840911Z" level=info msg="StartContainer for \"f692dda2f732d1429af13ea6b88e00bde2994302496291905d64944156525166\""
Dec 13 06:45:13.903841 systemd[1]: Started cri-containerd-f692dda2f732d1429af13ea6b88e00bde2994302496291905d64944156525166.scope.
Dec 13 06:45:13.954167 env[1191]: time="2024-12-13T06:45:13.954095779Z" level=info msg="StartContainer for \"f692dda2f732d1429af13ea6b88e00bde2994302496291905d64944156525166\" returns successfully"
Dec 13 06:45:14.026118 kubelet[1462]: I1213 06:45:14.026005 1462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-cfpdc" podStartSLOduration=2.870018657 podStartE2EDuration="10.02595207s" podCreationTimestamp="2024-12-13 06:45:04 +0000 UTC" firstStartedPulling="2024-12-13 06:45:06.692416856 +0000 UTC m=+33.918052942" lastFinishedPulling="2024-12-13 06:45:13.848350261 +0000 UTC m=+41.073986355" observedRunningTime="2024-12-13 06:45:14.025820794 +0000 UTC m=+41.251456884" watchObservedRunningTime="2024-12-13 06:45:14.02595207 +0000 UTC m=+41.251588168"
Dec 13 06:45:14.661110 kubelet[1462]: E1213 06:45:14.661039 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:15.662055 kubelet[1462]: E1213 06:45:15.662000 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:16.663858 kubelet[1462]: E1213 06:45:16.663801 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:17.664805 kubelet[1462]: E1213 06:45:17.664746 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:18.666691 kubelet[1462]: E1213 06:45:18.666608 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:19.667781 kubelet[1462]: E1213 06:45:19.667732 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:20.669524 kubelet[1462]: E1213 06:45:20.669408 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:21.670344 kubelet[1462]: E1213 06:45:21.670288 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:22.671844 kubelet[1462]: E1213 06:45:22.671782 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:23.673747 kubelet[1462]: E1213 06:45:23.673685 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:24.675140 kubelet[1462]: E1213 06:45:24.675074 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:25.675907 kubelet[1462]: E1213 06:45:25.675827 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:26.085298 kubelet[1462]: I1213 06:45:26.085166 1462 topology_manager.go:215] "Topology Admit Handler" podUID="7dc89797-0363-4e0f-b6c9-7a26ab353554" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 06:45:26.093202 systemd[1]: Created slice kubepods-besteffort-pod7dc89797_0363_4e0f_b6c9_7a26ab353554.slice.
Dec 13 06:45:26.145651 kubelet[1462]: I1213 06:45:26.145606 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcr96\" (UniqueName: \"kubernetes.io/projected/7dc89797-0363-4e0f-b6c9-7a26ab353554-kube-api-access-kcr96\") pod \"nfs-server-provisioner-0\" (UID: \"7dc89797-0363-4e0f-b6c9-7a26ab353554\") " pod="default/nfs-server-provisioner-0"
Dec 13 06:45:26.145944 kubelet[1462]: I1213 06:45:26.145911 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7dc89797-0363-4e0f-b6c9-7a26ab353554-data\") pod \"nfs-server-provisioner-0\" (UID: \"7dc89797-0363-4e0f-b6c9-7a26ab353554\") " pod="default/nfs-server-provisioner-0"
Dec 13 06:45:26.398036 env[1191]: time="2024-12-13T06:45:26.397886339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7dc89797-0363-4e0f-b6c9-7a26ab353554,Namespace:default,Attempt:0,}"
Dec 13 06:45:26.459171 systemd-networkd[1017]: lxc03d09f641bfa: Link UP
Dec 13 06:45:26.470652 kernel: eth0: renamed from tmp20674
Dec 13 06:45:26.479775 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 06:45:26.479895 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc03d09f641bfa: link becomes ready
Dec 13 06:45:26.480111 systemd-networkd[1017]: lxc03d09f641bfa: Gained carrier
Dec 13 06:45:26.677107 kubelet[1462]: E1213 06:45:26.677021 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:26.689882 env[1191]: time="2024-12-13T06:45:26.689748626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:45:26.689882 env[1191]: time="2024-12-13T06:45:26.689843356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:45:26.690137 env[1191]: time="2024-12-13T06:45:26.689862532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:45:26.691039 env[1191]: time="2024-12-13T06:45:26.690884620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20674c8f6a18fa92b76d70f482bb1d069541308c44d71fb7b986f82bdb589cd2 pid=2684 runtime=io.containerd.runc.v2
Dec 13 06:45:26.722972 systemd[1]: Started cri-containerd-20674c8f6a18fa92b76d70f482bb1d069541308c44d71fb7b986f82bdb589cd2.scope.
Dec 13 06:45:26.787949 env[1191]: time="2024-12-13T06:45:26.787888207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7dc89797-0363-4e0f-b6c9-7a26ab353554,Namespace:default,Attempt:0,} returns sandbox id \"20674c8f6a18fa92b76d70f482bb1d069541308c44d71fb7b986f82bdb589cd2\""
Dec 13 06:45:26.790779 env[1191]: time="2024-12-13T06:45:26.790744572Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 06:45:27.677526 kubelet[1462]: E1213 06:45:27.677472 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:28.437945 systemd-networkd[1017]: lxc03d09f641bfa: Gained IPv6LL
Dec 13 06:45:28.678423 kubelet[1462]: E1213 06:45:28.678337 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:29.680308 kubelet[1462]: E1213 06:45:29.679542 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:30.680499 kubelet[1462]: E1213 06:45:30.680341 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:31.681081 kubelet[1462]: E1213 06:45:31.681028 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:32.681740 kubelet[1462]: E1213 06:45:32.681672 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:33.524438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120867897.mount: Deactivated successfully.
Dec 13 06:45:33.631256 kubelet[1462]: E1213 06:45:33.631152 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:33.682667 kubelet[1462]: E1213 06:45:33.682599 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:34.683663 kubelet[1462]: E1213 06:45:34.683597 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:35.684629 kubelet[1462]: E1213 06:45:35.684460 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:36.685467 kubelet[1462]: E1213 06:45:36.685358 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:45:37.181624 env[1191]: time="2024-12-13T06:45:37.181432413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:37.184872 env[1191]: time="2024-12-13T06:45:37.184826971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:37.187152 env[1191]: time="2024-12-13T06:45:37.187117447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:37.189506 env[1191]: time="2024-12-13T06:45:37.189469065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:45:37.190644 env[1191]: time="2024-12-13T06:45:37.190592532Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 06:45:37.194501 env[1191]: time="2024-12-13T06:45:37.194449923Z" level=info msg="CreateContainer within sandbox \"20674c8f6a18fa92b76d70f482bb1d069541308c44d71fb7b986f82bdb589cd2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 06:45:37.207299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930635699.mount: Deactivated successfully.
Dec 13 06:45:37.215271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055425786.mount: Deactivated successfully.
Dec 13 06:45:37.229062 env[1191]: time="2024-12-13T06:45:37.228984664Z" level=info msg="CreateContainer within sandbox \"20674c8f6a18fa92b76d70f482bb1d069541308c44d71fb7b986f82bdb589cd2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"77b7dc0d8dc8ec209f354d97345be334d19d9f8b7da75ce0067839a8f2d7dd5e\""
Dec 13 06:45:37.229772 env[1191]: time="2024-12-13T06:45:37.229733871Z" level=info msg="StartContainer for \"77b7dc0d8dc8ec209f354d97345be334d19d9f8b7da75ce0067839a8f2d7dd5e\""
Dec 13 06:45:37.258102 systemd[1]: Started cri-containerd-77b7dc0d8dc8ec209f354d97345be334d19d9f8b7da75ce0067839a8f2d7dd5e.scope.
Dec 13 06:45:37.310265 env[1191]: time="2024-12-13T06:45:37.310202235Z" level=info msg="StartContainer for \"77b7dc0d8dc8ec209f354d97345be334d19d9f8b7da75ce0067839a8f2d7dd5e\" returns successfully" Dec 13 06:45:37.686057 kubelet[1462]: E1213 06:45:37.685967 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:38.090680 kubelet[1462]: I1213 06:45:38.090444 1462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.687874926 podStartE2EDuration="12.090393246s" podCreationTimestamp="2024-12-13 06:45:26 +0000 UTC" firstStartedPulling="2024-12-13 06:45:26.78981682 +0000 UTC m=+54.015452905" lastFinishedPulling="2024-12-13 06:45:37.19233514 +0000 UTC m=+64.417971225" observedRunningTime="2024-12-13 06:45:38.089768753 +0000 UTC m=+65.315404856" watchObservedRunningTime="2024-12-13 06:45:38.090393246 +0000 UTC m=+65.316029344" Dec 13 06:45:38.686748 kubelet[1462]: E1213 06:45:38.686601 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:39.687706 kubelet[1462]: E1213 06:45:39.687652 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:40.689506 kubelet[1462]: E1213 06:45:40.689447 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:41.690483 kubelet[1462]: E1213 06:45:41.690374 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:42.691502 kubelet[1462]: E1213 06:45:42.691446 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:43.693452 kubelet[1462]: E1213 06:45:43.693398 1462 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:44.695331 kubelet[1462]: E1213 06:45:44.695255 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:45.696650 kubelet[1462]: E1213 06:45:45.696599 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:46.698385 kubelet[1462]: E1213 06:45:46.698316 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:47.020752 kubelet[1462]: I1213 06:45:47.020604 1462 topology_manager.go:215] "Topology Admit Handler" podUID="4c1b0469-2d87-46c6-a0f0-bb26b28f07be" podNamespace="default" podName="test-pod-1" Dec 13 06:45:47.028794 systemd[1]: Created slice kubepods-besteffort-pod4c1b0469_2d87_46c6_a0f0_bb26b28f07be.slice. Dec 13 06:45:47.185637 kubelet[1462]: I1213 06:45:47.185566 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-328488ae-33c4-46ca-89ce-65657fc26cb7\" (UniqueName: \"kubernetes.io/nfs/4c1b0469-2d87-46c6-a0f0-bb26b28f07be-pvc-328488ae-33c4-46ca-89ce-65657fc26cb7\") pod \"test-pod-1\" (UID: \"4c1b0469-2d87-46c6-a0f0-bb26b28f07be\") " pod="default/test-pod-1" Dec 13 06:45:47.185984 kubelet[1462]: I1213 06:45:47.185945 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq9bv\" (UniqueName: \"kubernetes.io/projected/4c1b0469-2d87-46c6-a0f0-bb26b28f07be-kube-api-access-jq9bv\") pod \"test-pod-1\" (UID: \"4c1b0469-2d87-46c6-a0f0-bb26b28f07be\") " pod="default/test-pod-1" Dec 13 06:45:47.336608 kernel: FS-Cache: Loaded Dec 13 06:45:47.398930 kernel: RPC: Registered named UNIX socket transport module. Dec 13 06:45:47.399158 kernel: RPC: Registered udp transport module. 
Dec 13 06:45:47.399222 kernel: RPC: Registered tcp transport module. Dec 13 06:45:47.399281 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 06:45:47.479709 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 06:45:47.699344 kubelet[1462]: E1213 06:45:47.699291 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:47.719356 kernel: NFS: Registering the id_resolver key type Dec 13 06:45:47.719513 kernel: Key type id_resolver registered Dec 13 06:45:47.719593 kernel: Key type id_legacy registered Dec 13 06:45:47.778530 nfsidmap[2804]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 06:45:47.786135 nfsidmap[2807]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 06:45:47.935016 env[1191]: time="2024-12-13T06:45:47.934583772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4c1b0469-2d87-46c6-a0f0-bb26b28f07be,Namespace:default,Attempt:0,}" Dec 13 06:45:48.025469 systemd-networkd[1017]: lxcf8e59341017f: Link UP Dec 13 06:45:48.031719 kernel: eth0: renamed from tmp3ab7c Dec 13 06:45:48.046320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 06:45:48.046454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf8e59341017f: link becomes ready Dec 13 06:45:48.046382 systemd-networkd[1017]: lxcf8e59341017f: Gained carrier Dec 13 06:45:48.288655 env[1191]: time="2024-12-13T06:45:48.287806245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:45:48.288655 env[1191]: time="2024-12-13T06:45:48.287872857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:45:48.288655 env[1191]: time="2024-12-13T06:45:48.287891077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:45:48.289393 env[1191]: time="2024-12-13T06:45:48.289203896Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ab7ce8c7d18b050ae4837a2c6dc70ad5eedea525b2f3f410063cc613f54ac10 pid=2844 runtime=io.containerd.runc.v2 Dec 13 06:45:48.315698 systemd[1]: Started cri-containerd-3ab7ce8c7d18b050ae4837a2c6dc70ad5eedea525b2f3f410063cc613f54ac10.scope. Dec 13 06:45:48.326413 systemd[1]: run-containerd-runc-k8s.io-3ab7ce8c7d18b050ae4837a2c6dc70ad5eedea525b2f3f410063cc613f54ac10-runc.6O2MNm.mount: Deactivated successfully. Dec 13 06:45:48.403709 env[1191]: time="2024-12-13T06:45:48.403640793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4c1b0469-2d87-46c6-a0f0-bb26b28f07be,Namespace:default,Attempt:0,} returns sandbox id \"3ab7ce8c7d18b050ae4837a2c6dc70ad5eedea525b2f3f410063cc613f54ac10\"" Dec 13 06:45:48.406649 env[1191]: time="2024-12-13T06:45:48.406614872Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 06:45:48.700936 kubelet[1462]: E1213 06:45:48.700852 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:48.777025 env[1191]: time="2024-12-13T06:45:48.776960766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:45:48.780316 env[1191]: time="2024-12-13T06:45:48.780282268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
06:45:48.787255 env[1191]: time="2024-12-13T06:45:48.787219329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:45:48.789656 env[1191]: time="2024-12-13T06:45:48.789613204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 06:45:48.791268 env[1191]: time="2024-12-13T06:45:48.791221766Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 06:45:48.797660 env[1191]: time="2024-12-13T06:45:48.797612444Z" level=info msg="CreateContainer within sandbox \"3ab7ce8c7d18b050ae4837a2c6dc70ad5eedea525b2f3f410063cc613f54ac10\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 06:45:48.813625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519338203.mount: Deactivated successfully. Dec 13 06:45:48.820609 env[1191]: time="2024-12-13T06:45:48.820479697Z" level=info msg="CreateContainer within sandbox \"3ab7ce8c7d18b050ae4837a2c6dc70ad5eedea525b2f3f410063cc613f54ac10\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3d16b1026eb9be9dfc015ff83db9cf10f60bed753568048a02ddd7c4d5c26d12\"" Dec 13 06:45:48.821827 env[1191]: time="2024-12-13T06:45:48.821770364Z" level=info msg="StartContainer for \"3d16b1026eb9be9dfc015ff83db9cf10f60bed753568048a02ddd7c4d5c26d12\"" Dec 13 06:45:48.844871 systemd[1]: Started cri-containerd-3d16b1026eb9be9dfc015ff83db9cf10f60bed753568048a02ddd7c4d5c26d12.scope. 
Dec 13 06:45:48.904593 env[1191]: time="2024-12-13T06:45:48.904267222Z" level=info msg="StartContainer for \"3d16b1026eb9be9dfc015ff83db9cf10f60bed753568048a02ddd7c4d5c26d12\" returns successfully" Dec 13 06:45:49.125166 kubelet[1462]: I1213 06:45:49.124308 1462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.734960176 podStartE2EDuration="21.124283392s" podCreationTimestamp="2024-12-13 06:45:28 +0000 UTC" firstStartedPulling="2024-12-13 06:45:48.406131972 +0000 UTC m=+75.631768063" lastFinishedPulling="2024-12-13 06:45:48.795455189 +0000 UTC m=+76.021091279" observedRunningTime="2024-12-13 06:45:49.123496963 +0000 UTC m=+76.349133077" watchObservedRunningTime="2024-12-13 06:45:49.124283392 +0000 UTC m=+76.349919490" Dec 13 06:45:49.701976 kubelet[1462]: E1213 06:45:49.701928 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:49.749920 systemd-networkd[1017]: lxcf8e59341017f: Gained IPv6LL Dec 13 06:45:50.703728 kubelet[1462]: E1213 06:45:50.703646 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:51.705099 kubelet[1462]: E1213 06:45:51.705043 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:52.706650 kubelet[1462]: E1213 06:45:52.706511 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:53.632161 kubelet[1462]: E1213 06:45:53.632106 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:53.707040 kubelet[1462]: E1213 06:45:53.706995 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:54.243537 systemd[1]: 
run-containerd-runc-k8s.io-23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df-runc.WtAefC.mount: Deactivated successfully. Dec 13 06:45:54.269932 env[1191]: time="2024-12-13T06:45:54.269840523Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 06:45:54.278763 env[1191]: time="2024-12-13T06:45:54.278712632Z" level=info msg="StopContainer for \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\" with timeout 2 (s)" Dec 13 06:45:54.279264 env[1191]: time="2024-12-13T06:45:54.279227800Z" level=info msg="Stop container \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\" with signal terminated" Dec 13 06:45:54.288370 systemd-networkd[1017]: lxc_health: Link DOWN Dec 13 06:45:54.288388 systemd-networkd[1017]: lxc_health: Lost carrier Dec 13 06:45:54.330767 systemd[1]: cri-containerd-23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df.scope: Deactivated successfully. Dec 13 06:45:54.331256 systemd[1]: cri-containerd-23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df.scope: Consumed 9.839s CPU time. Dec 13 06:45:54.361086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df-rootfs.mount: Deactivated successfully. 
Dec 13 06:45:54.366448 env[1191]: time="2024-12-13T06:45:54.366392948Z" level=info msg="shim disconnected" id=23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df Dec 13 06:45:54.366892 env[1191]: time="2024-12-13T06:45:54.366859603Z" level=warning msg="cleaning up after shim disconnected" id=23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df namespace=k8s.io Dec 13 06:45:54.367058 env[1191]: time="2024-12-13T06:45:54.367027040Z" level=info msg="cleaning up dead shim" Dec 13 06:45:54.380346 env[1191]: time="2024-12-13T06:45:54.380287179Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2971 runtime=io.containerd.runc.v2\n" Dec 13 06:45:54.382725 env[1191]: time="2024-12-13T06:45:54.382682868Z" level=info msg="StopContainer for \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\" returns successfully" Dec 13 06:45:54.383785 env[1191]: time="2024-12-13T06:45:54.383748498Z" level=info msg="StopPodSandbox for \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\"" Dec 13 06:45:54.383985 env[1191]: time="2024-12-13T06:45:54.383947974Z" level=info msg="Container to stop \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:45:54.384250 env[1191]: time="2024-12-13T06:45:54.384215349Z" level=info msg="Container to stop \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:45:54.384394 env[1191]: time="2024-12-13T06:45:54.384359968Z" level=info msg="Container to stop \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:45:54.384543 env[1191]: time="2024-12-13T06:45:54.384498589Z" level=info msg="Container to stop 
\"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:45:54.384774 env[1191]: time="2024-12-13T06:45:54.384709412Z" level=info msg="Container to stop \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:45:54.387375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029-shm.mount: Deactivated successfully. Dec 13 06:45:54.395349 systemd[1]: cri-containerd-6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029.scope: Deactivated successfully. Dec 13 06:45:54.427349 env[1191]: time="2024-12-13T06:45:54.427277337Z" level=info msg="shim disconnected" id=6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029 Dec 13 06:45:54.427349 env[1191]: time="2024-12-13T06:45:54.427344738Z" level=warning msg="cleaning up after shim disconnected" id=6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029 namespace=k8s.io Dec 13 06:45:54.427649 env[1191]: time="2024-12-13T06:45:54.427361963Z" level=info msg="cleaning up dead shim" Dec 13 06:45:54.439687 env[1191]: time="2024-12-13T06:45:54.439644841Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3004 runtime=io.containerd.runc.v2\n" Dec 13 06:45:54.440604 env[1191]: time="2024-12-13T06:45:54.440545048Z" level=info msg="TearDown network for sandbox \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" successfully" Dec 13 06:45:54.440788 env[1191]: time="2024-12-13T06:45:54.440752927Z" level=info msg="StopPodSandbox for \"6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029\" returns successfully" Dec 13 06:45:54.533961 kubelet[1462]: I1213 06:45:54.532957 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hostproc\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534171 kubelet[1462]: I1213 06:45:54.534070 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hubble-tls\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534171 kubelet[1462]: I1213 06:45:54.534158 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-run\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534309 kubelet[1462]: I1213 06:45:54.534208 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-etc-cni-netd\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534309 kubelet[1462]: I1213 06:45:54.534241 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-lib-modules\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534309 kubelet[1462]: I1213 06:45:54.534282 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-xtables-lock\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534488 kubelet[1462]: I1213 06:45:54.534313 1462 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-bpf-maps\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534488 kubelet[1462]: I1213 06:45:54.534337 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-net\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534488 kubelet[1462]: I1213 06:45:54.534386 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbh2s\" (UniqueName: \"kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-kube-api-access-jbh2s\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534488 kubelet[1462]: I1213 06:45:54.534421 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-clustermesh-secrets\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534488 kubelet[1462]: I1213 06:45:54.534471 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-config-path\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534829 kubelet[1462]: I1213 06:45:54.534498 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-kernel\") pod 
\"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534829 kubelet[1462]: I1213 06:45:54.534542 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-cgroup\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534829 kubelet[1462]: I1213 06:45:54.534592 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cni-path\") pod \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\" (UID: \"f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3\") " Dec 13 06:45:54.534829 kubelet[1462]: I1213 06:45:54.533465 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hostproc" (OuterVolumeSpecName: "hostproc") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.534829 kubelet[1462]: I1213 06:45:54.534667 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cni-path" (OuterVolumeSpecName: "cni-path") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.537604 kubelet[1462]: I1213 06:45:54.535198 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.537604 kubelet[1462]: I1213 06:45:54.535249 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.537604 kubelet[1462]: I1213 06:45:54.535289 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.537604 kubelet[1462]: I1213 06:45:54.535317 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.537604 kubelet[1462]: I1213 06:45:54.535356 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.538031 kubelet[1462]: I1213 06:45:54.535383 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.538031 kubelet[1462]: I1213 06:45:54.536287 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.538031 kubelet[1462]: I1213 06:45:54.536349 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:45:54.540176 kubelet[1462]: I1213 06:45:54.540130 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:45:54.545083 kubelet[1462]: I1213 06:45:54.545042 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-kube-api-access-jbh2s" (OuterVolumeSpecName: "kube-api-access-jbh2s") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "kube-api-access-jbh2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:45:54.545339 kubelet[1462]: I1213 06:45:54.545300 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:45:54.545704 kubelet[1462]: I1213 06:45:54.545660 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" (UID: "f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:45:54.635350 kubelet[1462]: I1213 06:45:54.635244 1462 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-bpf-maps\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635350 kubelet[1462]: I1213 06:45:54.635312 1462 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-net\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635350 kubelet[1462]: I1213 06:45:54.635344 1462 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-host-proc-sys-kernel\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635350 kubelet[1462]: I1213 06:45:54.635374 1462 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-cgroup\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635808 kubelet[1462]: I1213 06:45:54.635388 1462 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cni-path\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635808 kubelet[1462]: I1213 06:45:54.635402 1462 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jbh2s\" (UniqueName: \"kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-kube-api-access-jbh2s\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635808 kubelet[1462]: I1213 06:45:54.635426 1462 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-clustermesh-secrets\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 
06:45:54.635808 kubelet[1462]: I1213 06:45:54.635440 1462 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-config-path\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635808 kubelet[1462]: I1213 06:45:54.635453 1462 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-etc-cni-netd\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635808 kubelet[1462]: I1213 06:45:54.635466 1462 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-lib-modules\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635808 kubelet[1462]: I1213 06:45:54.635478 1462 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-xtables-lock\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.635808 kubelet[1462]: I1213 06:45:54.635491 1462 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hostproc\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.636264 kubelet[1462]: I1213 06:45:54.635504 1462 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-hubble-tls\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.636264 kubelet[1462]: I1213 06:45:54.635518 1462 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3-cilium-run\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:45:54.708220 kubelet[1462]: E1213 06:45:54.708124 1462 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:55.115812 kubelet[1462]: I1213 06:45:55.115770 1462 scope.go:117] "RemoveContainer" containerID="23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df" Dec 13 06:45:55.119592 env[1191]: time="2024-12-13T06:45:55.119318275Z" level=info msg="RemoveContainer for \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\"" Dec 13 06:45:55.123684 systemd[1]: Removed slice kubepods-burstable-podf19f87fd_8f24_4d4d_9126_bdfd84bd6ee3.slice. Dec 13 06:45:55.123844 systemd[1]: kubepods-burstable-podf19f87fd_8f24_4d4d_9126_bdfd84bd6ee3.slice: Consumed 10.007s CPU time. Dec 13 06:45:55.126522 env[1191]: time="2024-12-13T06:45:55.126482393Z" level=info msg="RemoveContainer for \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\" returns successfully" Dec 13 06:45:55.127708 kubelet[1462]: I1213 06:45:55.127678 1462 scope.go:117] "RemoveContainer" containerID="789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd" Dec 13 06:45:55.128991 env[1191]: time="2024-12-13T06:45:55.128954919Z" level=info msg="RemoveContainer for \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\"" Dec 13 06:45:55.132281 env[1191]: time="2024-12-13T06:45:55.132236829Z" level=info msg="RemoveContainer for \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\" returns successfully" Dec 13 06:45:55.132546 kubelet[1462]: I1213 06:45:55.132521 1462 scope.go:117] "RemoveContainer" containerID="5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e" Dec 13 06:45:55.135104 env[1191]: time="2024-12-13T06:45:55.134788175Z" level=info msg="RemoveContainer for \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\"" Dec 13 06:45:55.137714 env[1191]: time="2024-12-13T06:45:55.137598311Z" level=info msg="RemoveContainer for \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\" returns successfully" Dec 13 06:45:55.137953 
kubelet[1462]: I1213 06:45:55.137927 1462 scope.go:117] "RemoveContainer" containerID="002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c" Dec 13 06:45:55.140730 env[1191]: time="2024-12-13T06:45:55.140691405Z" level=info msg="RemoveContainer for \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\"" Dec 13 06:45:55.144957 env[1191]: time="2024-12-13T06:45:55.144920233Z" level=info msg="RemoveContainer for \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\" returns successfully" Dec 13 06:45:55.146378 kubelet[1462]: I1213 06:45:55.146345 1462 scope.go:117] "RemoveContainer" containerID="8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a" Dec 13 06:45:55.147767 env[1191]: time="2024-12-13T06:45:55.147701779Z" level=info msg="RemoveContainer for \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\"" Dec 13 06:45:55.150612 env[1191]: time="2024-12-13T06:45:55.150540665Z" level=info msg="RemoveContainer for \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\" returns successfully" Dec 13 06:45:55.151458 kubelet[1462]: I1213 06:45:55.151423 1462 scope.go:117] "RemoveContainer" containerID="23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df" Dec 13 06:45:55.152000 env[1191]: time="2024-12-13T06:45:55.151880731Z" level=error msg="ContainerStatus for \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\": not found" Dec 13 06:45:55.152297 kubelet[1462]: E1213 06:45:55.152262 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\": not found" containerID="23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df" Dec 13 
06:45:55.152558 kubelet[1462]: I1213 06:45:55.152454 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df"} err="failed to get container status \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\": rpc error: code = NotFound desc = an error occurred when try to find container \"23c3530bd745c372583cae2013a64e8f02ff85c06e5b642e3c69fece9b6a75df\": not found" Dec 13 06:45:55.152715 kubelet[1462]: I1213 06:45:55.152690 1462 scope.go:117] "RemoveContainer" containerID="789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd" Dec 13 06:45:55.153127 env[1191]: time="2024-12-13T06:45:55.153004883Z" level=error msg="ContainerStatus for \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\": not found" Dec 13 06:45:55.153342 kubelet[1462]: E1213 06:45:55.153312 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\": not found" containerID="789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd" Dec 13 06:45:55.153500 kubelet[1462]: I1213 06:45:55.153459 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd"} err="failed to get container status \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"789294f5df35ed65d3ced9f3cacbce83135c0b582c1697a50433b7928567e0bd\": not found" Dec 13 06:45:55.153680 kubelet[1462]: I1213 06:45:55.153655 1462 scope.go:117] "RemoveContainer" 
containerID="5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e" Dec 13 06:45:55.154084 env[1191]: time="2024-12-13T06:45:55.154007919Z" level=error msg="ContainerStatus for \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\": not found" Dec 13 06:45:55.154282 kubelet[1462]: E1213 06:45:55.154226 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\": not found" containerID="5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e" Dec 13 06:45:55.154282 kubelet[1462]: I1213 06:45:55.154263 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e"} err="failed to get container status \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5231e8f01485913409423a80e37b4b598bae7cb492b3ed873c0cdf8e074fca1e\": not found" Dec 13 06:45:55.154438 kubelet[1462]: I1213 06:45:55.154286 1462 scope.go:117] "RemoveContainer" containerID="002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c" Dec 13 06:45:55.154928 env[1191]: time="2024-12-13T06:45:55.154788779Z" level=error msg="ContainerStatus for \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\": not found" Dec 13 06:45:55.155032 kubelet[1462]: E1213 06:45:55.154990 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\": not found" containerID="002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c" Dec 13 06:45:55.155032 kubelet[1462]: I1213 06:45:55.155017 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c"} err="failed to get container status \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\": rpc error: code = NotFound desc = an error occurred when try to find container \"002814d2e60c7bacb25546ef861ad7c76d6a2d655b07fd4306fa0c0a0758107c\": not found" Dec 13 06:45:55.155178 kubelet[1462]: I1213 06:45:55.155037 1462 scope.go:117] "RemoveContainer" containerID="8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a" Dec 13 06:45:55.155303 env[1191]: time="2024-12-13T06:45:55.155227764Z" level=error msg="ContainerStatus for \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\": not found" Dec 13 06:45:55.155444 kubelet[1462]: E1213 06:45:55.155418 1462 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\": not found" containerID="8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a" Dec 13 06:45:55.155533 kubelet[1462]: I1213 06:45:55.155447 1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a"} err="failed to get container status \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"8d3e8f2e8ab19192f8f00b309a6bfbdd15a9ca457c6c29df86b521da6c61b28a\": not found" Dec 13 06:45:55.235244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6946f6bf9e31df6c8baa77c3fec6aea21731f18d3faf77237e256ebfa3d5a029-rootfs.mount: Deactivated successfully. Dec 13 06:45:55.235397 systemd[1]: var-lib-kubelet-pods-f19f87fd\x2d8f24\x2d4d4d\x2d9126\x2dbdfd84bd6ee3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 06:45:55.235519 systemd[1]: var-lib-kubelet-pods-f19f87fd\x2d8f24\x2d4d4d\x2d9126\x2dbdfd84bd6ee3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djbh2s.mount: Deactivated successfully. Dec 13 06:45:55.235672 systemd[1]: var-lib-kubelet-pods-f19f87fd\x2d8f24\x2d4d4d\x2d9126\x2dbdfd84bd6ee3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 06:45:55.709147 kubelet[1462]: E1213 06:45:55.709050 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:55.848583 kubelet[1462]: I1213 06:45:55.848463 1462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" path="/var/lib/kubelet/pods/f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3/volumes" Dec 13 06:45:56.709304 kubelet[1462]: E1213 06:45:56.709249 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:57.710705 kubelet[1462]: E1213 06:45:57.710650 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:58.711918 kubelet[1462]: E1213 06:45:58.711798 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:45:58.794625 kubelet[1462]: E1213 06:45:58.794533 1462 kubelet.go:2900] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 06:45:58.802081 kubelet[1462]: I1213 06:45:58.802037 1462 topology_manager.go:215] "Topology Admit Handler" podUID="8b1588d6-b6ad-413e-83ec-7b3c60b5b932" podNamespace="kube-system" podName="cilium-operator-599987898-r6qw5" Dec 13 06:45:58.802205 kubelet[1462]: E1213 06:45:58.802151 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" containerName="apply-sysctl-overwrites" Dec 13 06:45:58.802205 kubelet[1462]: E1213 06:45:58.802174 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" containerName="mount-bpf-fs" Dec 13 06:45:58.802205 kubelet[1462]: E1213 06:45:58.802186 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" containerName="clean-cilium-state" Dec 13 06:45:58.802205 kubelet[1462]: E1213 06:45:58.802197 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" containerName="mount-cgroup" Dec 13 06:45:58.802205 kubelet[1462]: E1213 06:45:58.802208 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" containerName="cilium-agent" Dec 13 06:45:58.802508 kubelet[1462]: I1213 06:45:58.802263 1462 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19f87fd-8f24-4d4d-9126-bdfd84bd6ee3" containerName="cilium-agent" Dec 13 06:45:58.809655 systemd[1]: Created slice kubepods-besteffort-pod8b1588d6_b6ad_413e_83ec_7b3c60b5b932.slice. 
Dec 13 06:45:58.839265 kubelet[1462]: I1213 06:45:58.839214 1462 topology_manager.go:215] "Topology Admit Handler" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" podNamespace="kube-system" podName="cilium-c85r6" Dec 13 06:45:58.847513 systemd[1]: Created slice kubepods-burstable-podf288a694_2c0b_453c_8240_4689dd4bd822.slice. Dec 13 06:45:58.849315 kubelet[1462]: W1213 06:45:58.849280 1462 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf288a694_2c0b_453c_8240_4689dd4bd822.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf288a694_2c0b_453c_8240_4689dd4bd822.slice/cpuset.cpus.effective: no such device Dec 13 06:45:58.963148 kubelet[1462]: I1213 06:45:58.962873 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-clustermesh-secrets\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.963148 kubelet[1462]: I1213 06:45:58.963035 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-config-path\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.963148 kubelet[1462]: I1213 06:45:58.963083 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-net\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964013 kubelet[1462]: I1213 06:45:58.963747 1462 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-hubble-tls\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964013 kubelet[1462]: I1213 06:45:58.963812 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssxs6\" (UniqueName: \"kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-kube-api-access-ssxs6\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964013 kubelet[1462]: I1213 06:45:58.963894 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-cgroup\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964013 kubelet[1462]: I1213 06:45:58.963926 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-kernel\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964013 kubelet[1462]: I1213 06:45:58.963973 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b1588d6-b6ad-413e-83ec-7b3c60b5b932-cilium-config-path\") pod \"cilium-operator-599987898-r6qw5\" (UID: \"8b1588d6-b6ad-413e-83ec-7b3c60b5b932\") " pod="kube-system/cilium-operator-599987898-r6qw5" Dec 13 06:45:58.964323 kubelet[1462]: I1213 06:45:58.964018 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cni-path\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964323 kubelet[1462]: I1213 06:45:58.964082 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-xtables-lock\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964323 kubelet[1462]: I1213 06:45:58.964117 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-ipsec-secrets\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964323 kubelet[1462]: I1213 06:45:58.964176 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjcb\" (UniqueName: \"kubernetes.io/projected/8b1588d6-b6ad-413e-83ec-7b3c60b5b932-kube-api-access-tqjcb\") pod \"cilium-operator-599987898-r6qw5\" (UID: \"8b1588d6-b6ad-413e-83ec-7b3c60b5b932\") " pod="kube-system/cilium-operator-599987898-r6qw5" Dec 13 06:45:58.964323 kubelet[1462]: I1213 06:45:58.964224 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-run\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964768 kubelet[1462]: I1213 06:45:58.964260 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-bpf-maps\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964768 kubelet[1462]: I1213 06:45:58.964319 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-hostproc\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964768 kubelet[1462]: I1213 06:45:58.964348 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-etc-cni-netd\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:58.964768 kubelet[1462]: I1213 06:45:58.964415 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-lib-modules\") pod \"cilium-c85r6\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " pod="kube-system/cilium-c85r6" Dec 13 06:45:59.115243 env[1191]: time="2024-12-13T06:45:59.115151488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-r6qw5,Uid:8b1588d6-b6ad-413e-83ec-7b3c60b5b932,Namespace:kube-system,Attempt:0,}" Dec 13 06:45:59.156827 env[1191]: time="2024-12-13T06:45:59.156444804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c85r6,Uid:f288a694-2c0b-453c-8240-4689dd4bd822,Namespace:kube-system,Attempt:0,}" Dec 13 06:45:59.175719 env[1191]: time="2024-12-13T06:45:59.175611673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:45:59.175719 env[1191]: time="2024-12-13T06:45:59.175674091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:45:59.176076 env[1191]: time="2024-12-13T06:45:59.175948389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 06:45:59.176076 env[1191]: time="2024-12-13T06:45:59.175693045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:45:59.176230 env[1191]: time="2024-12-13T06:45:59.176113819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 06:45:59.176286 env[1191]: time="2024-12-13T06:45:59.176205118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 06:45:59.176694 env[1191]: time="2024-12-13T06:45:59.176616936Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2afeb6c7d28bbfc7e84ba19f25a5911525212756198428bde7ca03cee27587e9 pid=3044 runtime=io.containerd.runc.v2 Dec 13 06:45:59.176809 env[1191]: time="2024-12-13T06:45:59.176625006Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8 pid=3043 runtime=io.containerd.runc.v2 Dec 13 06:45:59.199703 systemd[1]: Started cri-containerd-11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8.scope. Dec 13 06:45:59.201294 systemd[1]: Started cri-containerd-2afeb6c7d28bbfc7e84ba19f25a5911525212756198428bde7ca03cee27587e9.scope. 
Dec 13 06:45:59.259646 env[1191]: time="2024-12-13T06:45:59.259484723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c85r6,Uid:f288a694-2c0b-453c-8240-4689dd4bd822,Namespace:kube-system,Attempt:0,} returns sandbox id \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\"" Dec 13 06:45:59.265167 env[1191]: time="2024-12-13T06:45:59.265110944Z" level=info msg="CreateContainer within sandbox \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 06:45:59.288436 env[1191]: time="2024-12-13T06:45:59.288381488Z" level=info msg="CreateContainer within sandbox \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\"" Dec 13 06:45:59.289820 env[1191]: time="2024-12-13T06:45:59.289787453Z" level=info msg="StartContainer for \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\"" Dec 13 06:45:59.302434 env[1191]: time="2024-12-13T06:45:59.302381391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-r6qw5,Uid:8b1588d6-b6ad-413e-83ec-7b3c60b5b932,Namespace:kube-system,Attempt:0,} returns sandbox id \"2afeb6c7d28bbfc7e84ba19f25a5911525212756198428bde7ca03cee27587e9\"" Dec 13 06:45:59.305348 env[1191]: time="2024-12-13T06:45:59.305304008Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 06:45:59.317650 systemd[1]: Started cri-containerd-960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6.scope. Dec 13 06:45:59.337617 systemd[1]: cri-containerd-960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6.scope: Deactivated successfully. 
Dec 13 06:45:59.356094 env[1191]: time="2024-12-13T06:45:59.356034988Z" level=info msg="shim disconnected" id=960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6 Dec 13 06:45:59.356343 env[1191]: time="2024-12-13T06:45:59.356300632Z" level=warning msg="cleaning up after shim disconnected" id=960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6 namespace=k8s.io Dec 13 06:45:59.356511 env[1191]: time="2024-12-13T06:45:59.356484582Z" level=info msg="cleaning up dead shim" Dec 13 06:45:59.369803 env[1191]: time="2024-12-13T06:45:59.369763297Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:45:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3133 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:45:59Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 06:45:59.370384 env[1191]: time="2024-12-13T06:45:59.370254240Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Dec 13 06:45:59.373721 env[1191]: time="2024-12-13T06:45:59.370783651Z" level=error msg="Failed to pipe stderr of container \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\"" error="reading from a closed fifo" Dec 13 06:45:59.373829 env[1191]: time="2024-12-13T06:45:59.373617159Z" level=error msg="Failed to pipe stdout of container \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\"" error="reading from a closed fifo" Dec 13 06:45:59.375224 env[1191]: time="2024-12-13T06:45:59.375168205Z" level=error msg="StartContainer for \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 06:45:59.376422 kubelet[1462]: E1213 06:45:59.375619 1462 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6" Dec 13 06:45:59.376422 kubelet[1462]: E1213 06:45:59.375946 1462 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 06:45:59.376422 kubelet[1462]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 06:45:59.376422 kubelet[1462]: rm /hostbin/cilium-mount Dec 13 06:45:59.376836 kubelet[1462]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssxs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-c85r6_kube-system(f288a694-2c0b-453c-8240-4689dd4bd822): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 06:45:59.377028 kubelet[1462]: E1213 06:45:59.376020 1462 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-c85r6" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" Dec 13 06:45:59.712822 kubelet[1462]: E1213 06:45:59.712775 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:46:00.135379 env[1191]: time="2024-12-13T06:46:00.135209843Z" level=info msg="CreateContainer within sandbox \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 06:46:00.150644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066741883.mount: Deactivated successfully. Dec 13 06:46:00.157957 env[1191]: time="2024-12-13T06:46:00.157911539Z" level=info msg="CreateContainer within sandbox \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\"" Dec 13 06:46:00.159110 env[1191]: time="2024-12-13T06:46:00.159060871Z" level=info msg="StartContainer for \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\"" Dec 13 06:46:00.187499 systemd[1]: Started cri-containerd-9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871.scope. Dec 13 06:46:00.203951 systemd[1]: cri-containerd-9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871.scope: Deactivated successfully. 
Dec 13 06:46:00.215158 env[1191]: time="2024-12-13T06:46:00.215094390Z" level=info msg="shim disconnected" id=9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871 Dec 13 06:46:00.215476 env[1191]: time="2024-12-13T06:46:00.215434060Z" level=warning msg="cleaning up after shim disconnected" id=9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871 namespace=k8s.io Dec 13 06:46:00.215649 env[1191]: time="2024-12-13T06:46:00.215620959Z" level=info msg="cleaning up dead shim" Dec 13 06:46:00.227205 env[1191]: time="2024-12-13T06:46:00.227138285Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3170 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T06:46:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 06:46:00.227599 env[1191]: time="2024-12-13T06:46:00.227503587Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Dec 13 06:46:00.229660 env[1191]: time="2024-12-13T06:46:00.229611498Z" level=error msg="Failed to pipe stdout of container \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\"" error="reading from a closed fifo" Dec 13 06:46:00.229920 env[1191]: time="2024-12-13T06:46:00.229818942Z" level=error msg="Failed to pipe stderr of container \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\"" error="reading from a closed fifo" Dec 13 06:46:00.231359 env[1191]: time="2024-12-13T06:46:00.231311559Z" level=error msg="StartContainer for \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 06:46:00.231708 kubelet[1462]: E1213 06:46:00.231645 1462 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871" Dec 13 06:46:00.231900 kubelet[1462]: E1213 06:46:00.231838 1462 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 06:46:00.231900 kubelet[1462]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 06:46:00.231900 kubelet[1462]: rm /hostbin/cilium-mount Dec 13 06:46:00.232085 kubelet[1462]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssxs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-c85r6_kube-system(f288a694-2c0b-453c-8240-4689dd4bd822): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 06:46:00.232085 kubelet[1462]: E1213 06:46:00.231919 1462 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-c85r6" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" Dec 13 06:46:00.713654 kubelet[1462]: E1213 06:46:00.713582 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:46:01.073099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871-rootfs.mount: Deactivated successfully. Dec 13 06:46:01.137999 kubelet[1462]: I1213 06:46:01.137677 1462 scope.go:117] "RemoveContainer" containerID="960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6" Dec 13 06:46:01.138361 env[1191]: time="2024-12-13T06:46:01.138310850Z" level=info msg="StopPodSandbox for \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\"" Dec 13 06:46:01.141790 env[1191]: time="2024-12-13T06:46:01.138410902Z" level=info msg="Container to stop \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:46:01.141790 env[1191]: time="2024-12-13T06:46:01.138457178Z" level=info msg="Container to stop \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 06:46:01.140976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8-shm.mount: Deactivated successfully. 
Dec 13 06:46:01.144891 env[1191]: time="2024-12-13T06:46:01.144827226Z" level=info msg="RemoveContainer for \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\"" Dec 13 06:46:01.148581 env[1191]: time="2024-12-13T06:46:01.148484547Z" level=info msg="RemoveContainer for \"960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6\" returns successfully" Dec 13 06:46:01.156971 systemd[1]: cri-containerd-11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8.scope: Deactivated successfully. Dec 13 06:46:01.172960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705551774.mount: Deactivated successfully. Dec 13 06:46:01.197234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8-rootfs.mount: Deactivated successfully. Dec 13 06:46:01.207338 env[1191]: time="2024-12-13T06:46:01.207282891Z" level=info msg="shim disconnected" id=11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8 Dec 13 06:46:01.207669 env[1191]: time="2024-12-13T06:46:01.207637359Z" level=warning msg="cleaning up after shim disconnected" id=11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8 namespace=k8s.io Dec 13 06:46:01.207825 env[1191]: time="2024-12-13T06:46:01.207795278Z" level=info msg="cleaning up dead shim" Dec 13 06:46:01.218725 env[1191]: time="2024-12-13T06:46:01.218674480Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3203 runtime=io.containerd.runc.v2\n" Dec 13 06:46:01.219141 env[1191]: time="2024-12-13T06:46:01.219092930Z" level=info msg="TearDown network for sandbox \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\" successfully" Dec 13 06:46:01.219242 env[1191]: time="2024-12-13T06:46:01.219136640Z" level=info msg="StopPodSandbox for \"11935781dcb91ae7d8be009e6beb7a534766a2e10bea078de9850d0dc2c60ab8\" returns successfully" Dec 13 
06:46:01.385505 kubelet[1462]: I1213 06:46:01.385318 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-bpf-maps\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.385505 kubelet[1462]: I1213 06:46:01.385385 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-etc-cni-netd\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.385505 kubelet[1462]: I1213 06:46:01.385413 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-lib-modules\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.385505 kubelet[1462]: I1213 06:46:01.385437 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-net\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.385505 kubelet[1462]: I1213 06:46:01.385465 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-run\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386110 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cni-path\") pod 
\"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386143 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-xtables-lock\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386168 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-hostproc\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386203 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-clustermesh-secrets\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386231 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-hubble-tls\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386292 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssxs6\" (UniqueName: \"kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-kube-api-access-ssxs6\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386329 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-cgroup\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386355 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-kernel\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386393 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-config-path\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.386464 kubelet[1462]: I1213 06:46:01.386423 1462 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-ipsec-secrets\") pod \"f288a694-2c0b-453c-8240-4689dd4bd822\" (UID: \"f288a694-2c0b-453c-8240-4689dd4bd822\") " Dec 13 06:46:01.387060 kubelet[1462]: I1213 06:46:01.386980 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.387060 kubelet[1462]: I1213 06:46:01.387031 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.387183 kubelet[1462]: I1213 06:46:01.387058 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.387183 kubelet[1462]: I1213 06:46:01.387085 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.387183 kubelet[1462]: I1213 06:46:01.387109 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.387183 kubelet[1462]: I1213 06:46:01.387133 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.387183 kubelet[1462]: I1213 06:46:01.387158 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cni-path" (OuterVolumeSpecName: "cni-path") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.391064 kubelet[1462]: I1213 06:46:01.391028 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-kube-api-access-ssxs6" (OuterVolumeSpecName: "kube-api-access-ssxs6") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "kube-api-access-ssxs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:46:01.391204 kubelet[1462]: I1213 06:46:01.391073 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:46:01.391329 kubelet[1462]: I1213 06:46:01.391102 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.391457 kubelet[1462]: I1213 06:46:01.391122 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.391634 kubelet[1462]: I1213 06:46:01.391606 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-hostproc" (OuterVolumeSpecName: "hostproc") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:46:01.394665 kubelet[1462]: I1213 06:46:01.394621 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:46:01.394938 kubelet[1462]: I1213 06:46:01.394893 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:46:01.397959 kubelet[1462]: I1213 06:46:01.397916 1462 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f288a694-2c0b-453c-8240-4689dd4bd822" (UID: "f288a694-2c0b-453c-8240-4689dd4bd822"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487353 1462 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-xtables-lock\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487401 1462 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-hostproc\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487417 1462 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-clustermesh-secrets\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487438 1462 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-hubble-tls\") on node \"10.230.56.170\" 
DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487453 1462 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cni-path\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487467 1462 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ssxs6\" (UniqueName: \"kubernetes.io/projected/f288a694-2c0b-453c-8240-4689dd4bd822-kube-api-access-ssxs6\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487482 1462 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-cgroup\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487496 1462 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-kernel\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487509 1462 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-config-path\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487522 1462 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-ipsec-secrets\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487536 1462 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-bpf-maps\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: 
I1213 06:46:01.487564 1462 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-etc-cni-netd\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487590 1462 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-lib-modules\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487620 1462 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-host-proc-sys-net\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.487689 kubelet[1462]: I1213 06:46:01.487637 1462 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f288a694-2c0b-453c-8240-4689dd4bd822-cilium-run\") on node \"10.230.56.170\" DevicePath \"\"" Dec 13 06:46:01.714831 kubelet[1462]: E1213 06:46:01.714762 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 06:46:01.852855 systemd[1]: Removed slice kubepods-burstable-podf288a694_2c0b_453c_8240_4689dd4bd822.slice. Dec 13 06:46:02.073119 systemd[1]: var-lib-kubelet-pods-f288a694\x2d2c0b\x2d453c\x2d8240\x2d4689dd4bd822-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssxs6.mount: Deactivated successfully. Dec 13 06:46:02.073261 systemd[1]: var-lib-kubelet-pods-f288a694\x2d2c0b\x2d453c\x2d8240\x2d4689dd4bd822-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 06:46:02.073381 systemd[1]: var-lib-kubelet-pods-f288a694\x2d2c0b\x2d453c\x2d8240\x2d4689dd4bd822-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 06:46:02.073495 systemd[1]: var-lib-kubelet-pods-f288a694\x2d2c0b\x2d453c\x2d8240\x2d4689dd4bd822-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 06:46:02.142947 kubelet[1462]: I1213 06:46:02.142789 1462 scope.go:117] "RemoveContainer" containerID="9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871" Dec 13 06:46:02.145635 env[1191]: time="2024-12-13T06:46:02.145589550Z" level=info msg="RemoveContainer for \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\"" Dec 13 06:46:02.149187 env[1191]: time="2024-12-13T06:46:02.149148668Z" level=info msg="RemoveContainer for \"9b256def3f46857afd4681d4cf44fb4d62594d663595292986567f0828524871\" returns successfully" Dec 13 06:46:02.462118 kubelet[1462]: W1213 06:46:02.462045 1462 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf288a694_2c0b_453c_8240_4689dd4bd822.slice/cri-containerd-960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6.scope WatchSource:0}: container "960db81de0ed1ec6a042a5e94643476e5e5548813dfc079ec3b3befffd0088f6" in namespace "k8s.io": not found Dec 13 06:46:02.479234 kubelet[1462]: I1213 06:46:02.479196 1462 topology_manager.go:215] "Topology Admit Handler" podUID="479d7c96-080d-4b8f-8f53-703fd23b332b" podNamespace="kube-system" podName="cilium-wvh2x" Dec 13 06:46:02.479450 kubelet[1462]: E1213 06:46:02.479423 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" containerName="mount-cgroup" Dec 13 06:46:02.479627 kubelet[1462]: I1213 06:46:02.479602 1462 memory_manager.go:354] "RemoveStaleState removing state" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" containerName="mount-cgroup" Dec 13 06:46:02.479767 kubelet[1462]: E1213 06:46:02.479742 1462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" containerName="mount-cgroup" Dec 
13 06:46:02.479906 kubelet[1462]: I1213 06:46:02.479871 1462 memory_manager.go:354] "RemoveStaleState removing state" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" containerName="mount-cgroup"
Dec 13 06:46:02.487400 systemd[1]: Created slice kubepods-burstable-pod479d7c96_080d_4b8f_8f53_703fd23b332b.slice.
Dec 13 06:46:02.595605 kubelet[1462]: I1213 06:46:02.595534 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-lib-modules\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.595986 kubelet[1462]: I1213 06:46:02.595957 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-xtables-lock\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.596194 kubelet[1462]: I1213 06:46:02.596156 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-hostproc\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.596379 kubelet[1462]: I1213 06:46:02.596342 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-cilium-cgroup\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.596578 kubelet[1462]: I1213 06:46:02.596532 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-cni-path\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.596769 kubelet[1462]: I1213 06:46:02.596733 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/479d7c96-080d-4b8f-8f53-703fd23b332b-cilium-ipsec-secrets\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.596969 kubelet[1462]: I1213 06:46:02.596933 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/479d7c96-080d-4b8f-8f53-703fd23b332b-hubble-tls\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.597148 kubelet[1462]: I1213 06:46:02.597112 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-cilium-run\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.597327 kubelet[1462]: I1213 06:46:02.597289 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-bpf-maps\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.597510 kubelet[1462]: I1213 06:46:02.597473 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-etc-cni-netd\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.597710 kubelet[1462]: I1213 06:46:02.597674 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/479d7c96-080d-4b8f-8f53-703fd23b332b-clustermesh-secrets\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.597889 kubelet[1462]: I1213 06:46:02.597852 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-host-proc-sys-kernel\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.598070 kubelet[1462]: I1213 06:46:02.598029 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7ww\" (UniqueName: \"kubernetes.io/projected/479d7c96-080d-4b8f-8f53-703fd23b332b-kube-api-access-lz7ww\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.598253 kubelet[1462]: I1213 06:46:02.598219 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/479d7c96-080d-4b8f-8f53-703fd23b332b-cilium-config-path\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.598417 kubelet[1462]: I1213 06:46:02.598381 1462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/479d7c96-080d-4b8f-8f53-703fd23b332b-host-proc-sys-net\") pod \"cilium-wvh2x\" (UID: \"479d7c96-080d-4b8f-8f53-703fd23b332b\") " pod="kube-system/cilium-wvh2x"
Dec 13 06:46:02.719982 kubelet[1462]: E1213 06:46:02.719871 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:02.797141 env[1191]: time="2024-12-13T06:46:02.796681625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvh2x,Uid:479d7c96-080d-4b8f-8f53-703fd23b332b,Namespace:kube-system,Attempt:0,}"
Dec 13 06:46:02.820533 env[1191]: time="2024-12-13T06:46:02.820198037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 06:46:02.820533 env[1191]: time="2024-12-13T06:46:02.820264816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 06:46:02.820533 env[1191]: time="2024-12-13T06:46:02.820282836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 06:46:02.821175 env[1191]: time="2024-12-13T06:46:02.821125082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2 pid=3230 runtime=io.containerd.runc.v2
Dec 13 06:46:02.834831 env[1191]: time="2024-12-13T06:46:02.834755368Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:46:02.840123 systemd[1]: Started cri-containerd-35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2.scope.
Dec 13 06:46:02.843876 env[1191]: time="2024-12-13T06:46:02.843836161Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:46:02.850607 env[1191]: time="2024-12-13T06:46:02.850505828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 06:46:02.851438 env[1191]: time="2024-12-13T06:46:02.851207417Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 06:46:02.857559 env[1191]: time="2024-12-13T06:46:02.857501187Z" level=info msg="CreateContainer within sandbox \"2afeb6c7d28bbfc7e84ba19f25a5911525212756198428bde7ca03cee27587e9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 06:46:02.882171 env[1191]: time="2024-12-13T06:46:02.882118533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvh2x,Uid:479d7c96-080d-4b8f-8f53-703fd23b332b,Namespace:kube-system,Attempt:0,} returns sandbox id \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\""
Dec 13 06:46:02.886001 env[1191]: time="2024-12-13T06:46:02.885961139Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 06:46:02.898227 env[1191]: time="2024-12-13T06:46:02.898173794Z" level=info msg="CreateContainer within sandbox \"2afeb6c7d28bbfc7e84ba19f25a5911525212756198428bde7ca03cee27587e9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1ffdb2b6cf86c9b5ec13f6a938adc572263ea94fa3d613aa5726a045341ede69\""
Dec 13 06:46:02.898770 env[1191]: time="2024-12-13T06:46:02.898717019Z" level=info msg="StartContainer for \"1ffdb2b6cf86c9b5ec13f6a938adc572263ea94fa3d613aa5726a045341ede69\""
Dec 13 06:46:02.905747 env[1191]: time="2024-12-13T06:46:02.905706627Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b\""
Dec 13 06:46:02.906850 env[1191]: time="2024-12-13T06:46:02.906788098Z" level=info msg="StartContainer for \"f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b\""
Dec 13 06:46:02.935186 systemd[1]: Started cri-containerd-1ffdb2b6cf86c9b5ec13f6a938adc572263ea94fa3d613aa5726a045341ede69.scope.
Dec 13 06:46:02.958260 systemd[1]: Started cri-containerd-f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b.scope.
Dec 13 06:46:03.008048 env[1191]: time="2024-12-13T06:46:03.007288901Z" level=info msg="StartContainer for \"1ffdb2b6cf86c9b5ec13f6a938adc572263ea94fa3d613aa5726a045341ede69\" returns successfully"
Dec 13 06:46:03.020410 env[1191]: time="2024-12-13T06:46:03.020284780Z" level=info msg="StartContainer for \"f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b\" returns successfully"
Dec 13 06:46:03.044239 systemd[1]: cri-containerd-f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b.scope: Deactivated successfully.
Dec 13 06:46:03.104003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b-rootfs.mount: Deactivated successfully.
Dec 13 06:46:03.174291 env[1191]: time="2024-12-13T06:46:03.174235857Z" level=info msg="shim disconnected" id=f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b
Dec 13 06:46:03.175095 env[1191]: time="2024-12-13T06:46:03.175058391Z" level=warning msg="cleaning up after shim disconnected" id=f6da922f0224405ed16fa53962a7d3fa3a0d6134f14c733362aa767041c2d82b namespace=k8s.io
Dec 13 06:46:03.175290 env[1191]: time="2024-12-13T06:46:03.175247885Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:03.191162 env[1191]: time="2024-12-13T06:46:03.188196960Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3354 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:03.722220 kubelet[1462]: E1213 06:46:03.722146 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:03.796373 kubelet[1462]: E1213 06:46:03.796290 1462 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 06:46:03.848545 kubelet[1462]: I1213 06:46:03.848492 1462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f288a694-2c0b-453c-8240-4689dd4bd822" path="/var/lib/kubelet/pods/f288a694-2c0b-453c-8240-4689dd4bd822/volumes"
Dec 13 06:46:04.155749 env[1191]: time="2024-12-13T06:46:04.155331935Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 06:46:04.170825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645399551.mount: Deactivated successfully.
Dec 13 06:46:04.176833 env[1191]: time="2024-12-13T06:46:04.176427540Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8\""
Dec 13 06:46:04.178228 env[1191]: time="2024-12-13T06:46:04.178193320Z" level=info msg="StartContainer for \"76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8\""
Dec 13 06:46:04.194103 kubelet[1462]: I1213 06:46:04.194026 1462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-r6qw5" podStartSLOduration=2.642901015 podStartE2EDuration="6.193986801s" podCreationTimestamp="2024-12-13 06:45:58 +0000 UTC" firstStartedPulling="2024-12-13 06:45:59.304281261 +0000 UTC m=+86.529917352" lastFinishedPulling="2024-12-13 06:46:02.85536704 +0000 UTC m=+90.081003138" observedRunningTime="2024-12-13 06:46:03.284347047 +0000 UTC m=+90.509983164" watchObservedRunningTime="2024-12-13 06:46:04.193986801 +0000 UTC m=+91.419622900"
Dec 13 06:46:04.210825 systemd[1]: Started cri-containerd-76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8.scope.
Dec 13 06:46:04.257024 env[1191]: time="2024-12-13T06:46:04.256947623Z" level=info msg="StartContainer for \"76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8\" returns successfully"
Dec 13 06:46:04.268907 systemd[1]: cri-containerd-76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8.scope: Deactivated successfully.
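[Annotation] The `pod_startup_latency_tracker` entry above for cilium-operator encodes a small piece of arithmetic: the E2E figure (6.193986801 s) is `watchObservedRunningTime` minus `podCreationTimestamp`, and the SLO figure appears to subtract the image-pull window measured on the monotonic clock (the `m=+…` offsets). A sketch reproducing the reported numbers from the values in that entry — my interpretation of how kubelet derives them:

```python
from decimal import Decimal  # exact decimal arithmetic, no float rounding

# Monotonic offsets ("m=+...") and durations copied from the kubelet entry above.
first_started_pulling = Decimal("86.529917352")   # m=+ at firstStartedPulling
last_finished_pulling = Decimal("90.081003138")   # m=+ at lastFinishedPulling
pod_start_e2e = Decimal("6.193986801")            # watchObservedRunningTime - podCreationTimestamp

image_pull = last_finished_pulling - first_started_pulling  # pull window: 3.551085786 s
pod_start_slo = pod_start_e2e - image_pull                  # 2.642901015 s, as logged
```

Note the cilium-wvh2x entry later in this log has zero-valued pull timestamps (`0001-01-01 …`), so its SLO and E2E durations coincide.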
Dec 13 06:46:04.295673 env[1191]: time="2024-12-13T06:46:04.295615561Z" level=info msg="shim disconnected" id=76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8
Dec 13 06:46:04.296018 env[1191]: time="2024-12-13T06:46:04.295983514Z" level=warning msg="cleaning up after shim disconnected" id=76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8 namespace=k8s.io
Dec 13 06:46:04.296156 env[1191]: time="2024-12-13T06:46:04.296127108Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:04.307575 env[1191]: time="2024-12-13T06:46:04.307499732Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3416 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:04.722903 kubelet[1462]: E1213 06:46:04.722850 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:05.159643 env[1191]: time="2024-12-13T06:46:05.159346638Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 06:46:05.167365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76599774d8abd88ad4408723240c06763861d952af3c40171c838b0ba54dbdb8-rootfs.mount: Deactivated successfully.
Dec 13 06:46:05.182912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173646021.mount: Deactivated successfully.
Dec 13 06:46:05.197263 env[1191]: time="2024-12-13T06:46:05.197198169Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729\""
Dec 13 06:46:05.198568 env[1191]: time="2024-12-13T06:46:05.198505770Z" level=info msg="StartContainer for \"9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729\""
Dec 13 06:46:05.232061 systemd[1]: Started cri-containerd-9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729.scope.
Dec 13 06:46:05.283502 env[1191]: time="2024-12-13T06:46:05.283430023Z" level=info msg="StartContainer for \"9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729\" returns successfully"
Dec 13 06:46:05.290685 systemd[1]: cri-containerd-9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729.scope: Deactivated successfully.
Dec 13 06:46:05.322167 env[1191]: time="2024-12-13T06:46:05.322107737Z" level=info msg="shim disconnected" id=9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729
Dec 13 06:46:05.322432 env[1191]: time="2024-12-13T06:46:05.322169675Z" level=warning msg="cleaning up after shim disconnected" id=9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729 namespace=k8s.io
Dec 13 06:46:05.322432 env[1191]: time="2024-12-13T06:46:05.322187330Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:05.333249 env[1191]: time="2024-12-13T06:46:05.333183896Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3476 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:05.724868 kubelet[1462]: E1213 06:46:05.724797 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:06.164756 env[1191]: time="2024-12-13T06:46:06.164627584Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 06:46:06.170204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9559224a09b7bbdf5260d0651e38c06ad9912ce784a8191af0fe4d675099f729-rootfs.mount: Deactivated successfully.
Dec 13 06:46:06.186659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268978257.mount: Deactivated successfully.
Dec 13 06:46:06.191511 kubelet[1462]: I1213 06:46:06.191047 1462 setters.go:580] "Node became not ready" node="10.230.56.170" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T06:46:06Z","lastTransitionTime":"2024-12-13T06:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 06:46:06.196660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684381307.mount: Deactivated successfully.
Dec 13 06:46:06.200129 env[1191]: time="2024-12-13T06:46:06.200067001Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be2fb2567deacf1cca8fd8c9bf4147e7a10bbdc25cacb896dadef01cd90b8b1a\""
Dec 13 06:46:06.200959 env[1191]: time="2024-12-13T06:46:06.200924818Z" level=info msg="StartContainer for \"be2fb2567deacf1cca8fd8c9bf4147e7a10bbdc25cacb896dadef01cd90b8b1a\""
Dec 13 06:46:06.225004 systemd[1]: Started cri-containerd-be2fb2567deacf1cca8fd8c9bf4147e7a10bbdc25cacb896dadef01cd90b8b1a.scope.
Dec 13 06:46:06.267119 systemd[1]: cri-containerd-be2fb2567deacf1cca8fd8c9bf4147e7a10bbdc25cacb896dadef01cd90b8b1a.scope: Deactivated successfully.
Dec 13 06:46:06.271293 env[1191]: time="2024-12-13T06:46:06.271215651Z" level=info msg="StartContainer for \"be2fb2567deacf1cca8fd8c9bf4147e7a10bbdc25cacb896dadef01cd90b8b1a\" returns successfully"
Dec 13 06:46:06.298488 env[1191]: time="2024-12-13T06:46:06.298426952Z" level=info msg="shim disconnected" id=be2fb2567deacf1cca8fd8c9bf4147e7a10bbdc25cacb896dadef01cd90b8b1a
Dec 13 06:46:06.298877 env[1191]: time="2024-12-13T06:46:06.298839825Z" level=warning msg="cleaning up after shim disconnected" id=be2fb2567deacf1cca8fd8c9bf4147e7a10bbdc25cacb896dadef01cd90b8b1a namespace=k8s.io
Dec 13 06:46:06.299036 env[1191]: time="2024-12-13T06:46:06.299005963Z" level=info msg="cleaning up dead shim"
Dec 13 06:46:06.309892 env[1191]: time="2024-12-13T06:46:06.309831844Z" level=warning msg="cleanup warnings time=\"2024-12-13T06:46:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3532 runtime=io.containerd.runc.v2\n"
Dec 13 06:46:06.725412 kubelet[1462]: E1213 06:46:06.725331 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:07.175695 env[1191]: time="2024-12-13T06:46:07.175644037Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 06:46:07.200098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782282302.mount: Deactivated successfully.
Dec 13 06:46:07.211754 env[1191]: time="2024-12-13T06:46:07.211647670Z" level=info msg="CreateContainer within sandbox \"35d567667f528967b5a2becf3e2e8f8a438045abb9bf42de6f2be377f92b10f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9\""
Dec 13 06:46:07.212686 env[1191]: time="2024-12-13T06:46:07.212521100Z" level=info msg="StartContainer for \"a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9\""
Dec 13 06:46:07.240531 systemd[1]: Started cri-containerd-a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9.scope.
Dec 13 06:46:07.295717 env[1191]: time="2024-12-13T06:46:07.295625781Z" level=info msg="StartContainer for \"a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9\" returns successfully"
Dec 13 06:46:07.726498 kubelet[1462]: E1213 06:46:07.726394 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:08.024587 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 06:46:08.205430 kubelet[1462]: I1213 06:46:08.205370 1462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wvh2x" podStartSLOduration=6.205351517 podStartE2EDuration="6.205351517s" podCreationTimestamp="2024-12-13 06:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 06:46:08.201218353 +0000 UTC m=+95.426854467" watchObservedRunningTime="2024-12-13 06:46:08.205351517 +0000 UTC m=+95.430987609"
Dec 13 06:46:08.727074 kubelet[1462]: E1213 06:46:08.727030 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:08.751275 systemd[1]: run-containerd-runc-k8s.io-a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9-runc.9dTJH5.mount: Deactivated successfully.
Dec 13 06:46:09.728657 kubelet[1462]: E1213 06:46:09.728596 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:10.730287 kubelet[1462]: E1213 06:46:10.730219 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:10.994180 systemd[1]: run-containerd-runc-k8s.io-a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9-runc.9t6mby.mount: Deactivated successfully.
Dec 13 06:46:11.510274 systemd-networkd[1017]: lxc_health: Link UP
Dec 13 06:46:11.518590 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 06:46:11.522080 systemd-networkd[1017]: lxc_health: Gained carrier
Dec 13 06:46:11.730997 kubelet[1462]: E1213 06:46:11.730827 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:12.732074 kubelet[1462]: E1213 06:46:12.732008 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:12.879828 systemd-networkd[1017]: lxc_health: Gained IPv6LL
Dec 13 06:46:13.344060 systemd[1]: run-containerd-runc-k8s.io-a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9-runc.PY8HYK.mount: Deactivated successfully.
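[Annotation] Between 06:46:02 and 06:46:07 the log records the cilium-wvh2x pod running its containers in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then the long-lived cilium-agent. That order can be recovered mechanically from the `&ContainerMetadata{Name:…,Attempt:…,}` fragments; a sketch using abbreviated copies of the `CreateContainer … returns container id` messages above (the sandbox and container ids are shortened for the sample):

```python
import re

# Abbreviated copies of the "CreateContainer ... returns container id" messages
# above, listed in journal order.
msgs = [
    'CreateContainer within sandbox "35d5..." for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id "f6da..."',
    'CreateContainer within sandbox "35d5..." for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id "7659..."',
    'CreateContainer within sandbox "35d5..." for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id "9559..."',
    'CreateContainer within sandbox "35d5..." for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id "be2f..."',
    'CreateContainer within sandbox "35d5..." for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id "a233..."',
]
META = re.compile(r"&ContainerMetadata\{Name:([A-Za-z0-9-]+),Attempt:(\d+),\}")
order = [META.search(m).group(1) for m in msgs]
# order lists the init chain first, cilium-agent last
```

Each of the first four is followed in the log by `scope: Deactivated successfully` and a `shim disconnected` cleanup, which is what a run-to-completion init container looks like from containerd's side; only cilium-agent's scope stays active.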
Dec 13 06:46:13.631833 kubelet[1462]: E1213 06:46:13.631635 1462 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:13.733561 kubelet[1462]: E1213 06:46:13.733489 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:14.734178 kubelet[1462]: E1213 06:46:14.734119 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:15.601312 systemd[1]: run-containerd-runc-k8s.io-a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9-runc.qlSjZK.mount: Deactivated successfully.
Dec 13 06:46:15.735988 kubelet[1462]: E1213 06:46:15.735905 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:16.736505 kubelet[1462]: E1213 06:46:16.736443 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:17.737922 kubelet[1462]: E1213 06:46:17.737851 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:17.898600 systemd[1]: run-containerd-runc-k8s.io-a233770c18e2e0257c6c372226827d51f487f3da9a0ac002e1948c5d0a6c56e9-runc.55zluG.mount: Deactivated successfully.
Dec 13 06:46:18.739371 kubelet[1462]: E1213 06:46:18.739273 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:19.740032 kubelet[1462]: E1213 06:46:19.739971 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:20.741279 kubelet[1462]: E1213 06:46:20.741175 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:21.742280 kubelet[1462]: E1213 06:46:21.742208 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 06:46:22.744337 kubelet[1462]: E1213 06:46:22.744247 1462 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
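[Annotation] The `file_linux.go:61 "Unable to read config path"` error recurs roughly once per second here; it is kubelet periodically probing its static-pod manifest directory (`/etc/kubernetes/manifests`), which simply does not exist on this node, so each repeat carries no new information. When reading logs like this, collapsing repeated payloads makes the one-off events stand out; a small sketch (sample lines are abbreviated copies of entries above):

```python
from collections import Counter
import re

# Abbreviated copies of entries above; the "..." elisions are deliberate.
lines = [
    'Dec 13 06:46:08.727074 kubelet[1462]: E1213 ... "Unable to read config path" path="/etc/kubernetes/manifests"',
    'Dec 13 06:46:09.728657 kubelet[1462]: E1213 ... "Unable to read config path" path="/etc/kubernetes/manifests"',
    'Dec 13 06:46:10.994180 systemd[1]: run-containerd-runc-k8s.io-a233...mount: Deactivated successfully.',
    'Dec 13 06:46:11.510274 systemd-networkd[1017]: lxc_health: Link UP',
]
# Strip the journal timestamp prefix so identical payloads hash together.
payload = [re.sub(r"^\w{3} \d+ [\d:.]+ ", "", l) for l in lines]
counts = Counter(payload)
# counts maps each distinct payload to its repeat count
```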