Dec 13 16:12:37.938698 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 16:12:37.938741 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 16:12:37.938762 kernel: BIOS-provided physical RAM map:
Dec 13 16:12:37.938772 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 16:12:37.938782 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 16:12:37.938792 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 16:12:37.938803 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 16:12:37.938814 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 16:12:37.938824 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 16:12:37.938845 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 16:12:37.938861 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 16:12:37.938871 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 16:12:37.938881 kernel: NX (Execute Disable) protection: active
Dec 13 16:12:37.938892 kernel: SMBIOS 2.8 present.
Dec 13 16:12:37.938904 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 16:12:37.938915 kernel: Hypervisor detected: KVM
Dec 13 16:12:37.938931 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 16:12:37.938942 kernel: kvm-clock: cpu 0, msr 7e19a001, primary cpu clock
Dec 13 16:12:37.938953 kernel: kvm-clock: using sched offset of 4849894112 cycles
Dec 13 16:12:37.938965 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 16:12:37.938976 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 16:12:37.938987 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 16:12:37.938999 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 16:12:37.939010 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 16:12:37.939021 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 16:12:37.939036 kernel: Using GB pages for direct mapping
Dec 13 16:12:37.939047 kernel: ACPI: Early table checksum verification disabled
Dec 13 16:12:37.939058 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 16:12:37.939069 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 16:12:37.939081 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 16:12:37.939092 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 16:12:37.939103 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 16:12:37.939114 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 16:12:37.939125 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 16:12:37.939140 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 16:12:37.939151 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 16:12:37.939162 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 16:12:37.939173 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 16:12:37.939184 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 16:12:37.939196 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 16:12:37.939213 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 16:12:37.939229 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 16:12:37.939241 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 16:12:37.939253 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 16:12:37.939264 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 16:12:37.939276 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 16:12:37.939288 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 16:12:37.939300 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 16:12:37.939315 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 16:12:37.939327 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 16:12:37.939339 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 16:12:37.939350 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 16:12:37.939362 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 16:12:37.939374 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 16:12:37.939385 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 16:12:37.939397 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 16:12:37.939416 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 16:12:37.939428 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 16:12:37.939443 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 16:12:37.939455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 16:12:37.939467 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 16:12:37.939479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 16:12:37.939491 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 16:12:37.939503 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 16:12:37.939515 kernel: Zone ranges:
Dec 13 16:12:37.939527 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 16:12:37.939539 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 16:12:37.939555 kernel: Normal empty
Dec 13 16:12:37.939567 kernel: Movable zone start for each node
Dec 13 16:12:37.939578 kernel: Early memory node ranges
Dec 13 16:12:37.939590 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 16:12:37.939602 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 16:12:37.939614 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 16:12:37.939625 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 16:12:37.939652 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 16:12:37.940682 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 16:12:37.940708 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 16:12:37.940721 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 16:12:37.940733 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 16:12:37.940745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 16:12:37.940757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 16:12:37.940769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 16:12:37.940781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 16:12:37.940793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 16:12:37.940805 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 16:12:37.940821 kernel: TSC deadline timer available
Dec 13 16:12:37.940845 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 16:12:37.940858 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 16:12:37.940870 kernel: Booting paravirtualized kernel on KVM
Dec 13 16:12:37.940882 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 16:12:37.940894 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 16:12:37.940906 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 16:12:37.940918 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 16:12:37.940930 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 16:12:37.940946 kernel: kvm-guest: stealtime: cpu 0, msr 7fa1c0c0
Dec 13 16:12:37.940958 kernel: kvm-guest: PV spinlocks enabled
Dec 13 16:12:37.940970 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 16:12:37.940982 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 16:12:37.940994 kernel: Policy zone: DMA32
Dec 13 16:12:37.941007 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 16:12:37.941020 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 16:12:37.941031 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 16:12:37.941048 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 16:12:37.941060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 16:12:37.941072 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 192524K reserved, 0K cma-reserved)
Dec 13 16:12:37.941085 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 16:12:37.941097 kernel: Kernel/User page tables isolation: enabled
Dec 13 16:12:37.941109 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 16:12:37.941120 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 16:12:37.941132 kernel: rcu: Hierarchical RCU implementation.
Dec 13 16:12:37.941145 kernel: rcu: RCU event tracing is enabled.
Dec 13 16:12:37.941162 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 16:12:37.941174 kernel: Rude variant of Tasks RCU enabled.
Dec 13 16:12:37.941186 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 16:12:37.941198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 16:12:37.941210 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 16:12:37.941222 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 16:12:37.941234 kernel: random: crng init done
Dec 13 16:12:37.941261 kernel: Console: colour VGA+ 80x25
Dec 13 16:12:37.941274 kernel: printk: console [tty0] enabled
Dec 13 16:12:37.941286 kernel: printk: console [ttyS0] enabled
Dec 13 16:12:37.941299 kernel: ACPI: Core revision 20210730
Dec 13 16:12:37.941311 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 16:12:37.941328 kernel: x2apic enabled
Dec 13 16:12:37.941340 kernel: Switched APIC routing to physical x2apic.
Dec 13 16:12:37.941353 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 16:12:37.941366 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 16:12:37.941379 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 16:12:37.941396 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 16:12:37.941408 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 16:12:37.941425 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 16:12:37.941437 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 16:12:37.941449 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 16:12:37.941462 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 16:12:37.941474 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 16:12:37.941488 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 16:12:37.941500 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 16:12:37.941512 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 16:12:37.941529 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 16:12:37.941541 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 16:12:37.941553 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 16:12:37.941566 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 16:12:37.941578 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 16:12:37.941590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 16:12:37.941603 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 16:12:37.941621 kernel: Freeing SMP alternatives memory: 32K
Dec 13 16:12:37.941644 kernel: pid_max: default: 32768 minimum: 301
Dec 13 16:12:37.941659 kernel: LSM: Security Framework initializing
Dec 13 16:12:37.941672 kernel: SELinux: Initializing.
Dec 13 16:12:37.941689 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 16:12:37.941702 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 16:12:37.941715 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 16:12:37.941727 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 16:12:37.941740 kernel: signal: max sigframe size: 1776
Dec 13 16:12:37.941752 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 16:12:37.941765 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 16:12:37.941777 kernel: smp: Bringing up secondary CPUs ...
Dec 13 16:12:37.941790 kernel: x86: Booting SMP configuration:
Dec 13 16:12:37.941802 kernel: .... node #0, CPUs: #1
Dec 13 16:12:37.941819 kernel: kvm-clock: cpu 1, msr 7e19a041, secondary cpu clock
Dec 13 16:12:37.941840 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 16:12:37.941854 kernel: kvm-guest: stealtime: cpu 1, msr 7fa5c0c0
Dec 13 16:12:37.941867 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 16:12:37.941879 kernel: smpboot: Max logical packages: 16
Dec 13 16:12:37.941892 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 16:12:37.941904 kernel: devtmpfs: initialized
Dec 13 16:12:37.941917 kernel: x86/mm: Memory block size: 128MB
Dec 13 16:12:37.941929 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 16:12:37.941946 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 16:12:37.941959 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 16:12:37.941972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 16:12:37.941984 kernel: audit: initializing netlink subsys (disabled)
Dec 13 16:12:37.941997 kernel: audit: type=2000 audit(1734106357.045:1): state=initialized audit_enabled=0 res=1
Dec 13 16:12:37.942009 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 16:12:37.942022 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 16:12:37.942034 kernel: cpuidle: using governor menu
Dec 13 16:12:37.942046 kernel: ACPI: bus type PCI registered
Dec 13 16:12:37.942063 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 16:12:37.942076 kernel: dca service started, version 1.12.1
Dec 13 16:12:37.942089 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 16:12:37.942101 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 16:12:37.942114 kernel: PCI: Using configuration type 1 for base access
Dec 13 16:12:37.942127 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 16:12:37.942139 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 16:12:37.942152 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 16:12:37.942164 kernel: ACPI: Added _OSI(Module Device)
Dec 13 16:12:37.942181 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 16:12:37.942194 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 16:12:37.942207 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 16:12:37.942219 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 16:12:37.942232 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 16:12:37.942244 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 16:12:37.942257 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 16:12:37.942270 kernel: ACPI: Interpreter enabled
Dec 13 16:12:37.942282 kernel: ACPI: PM: (supports S0 S5)
Dec 13 16:12:37.942299 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 16:12:37.942312 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 16:12:37.942324 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 16:12:37.942337 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 16:12:37.942649 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 16:12:37.942820 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 16:12:37.942994 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 16:12:37.943013 kernel: PCI host bridge to bus 0000:00
Dec 13 16:12:37.943188 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 16:12:37.943334 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 16:12:37.943489 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 16:12:37.943657 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 16:12:37.943815 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 16:12:37.943971 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 16:12:37.944124 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 16:12:37.944308 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 16:12:37.944478 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 16:12:37.951688 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 16:12:37.951909 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 16:12:37.952079 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 16:12:37.952261 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 16:12:37.952457 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.952622 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 16:12:37.952853 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.953026 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 16:12:37.953196 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.953357 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 16:12:37.953533 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.953845 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 16:12:37.954042 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.954199 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 16:12:37.954362 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.954518 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 16:12:37.954708 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.954880 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 16:12:37.955045 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 16:12:37.955200 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 16:12:37.955364 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 16:12:37.955524 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 16:12:37.955716 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 16:12:37.955886 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 16:12:37.956042 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 16:12:37.956227 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 16:12:37.956386 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 16:12:37.956540 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 16:12:37.956710 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 16:12:37.956899 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 16:12:37.957064 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 16:12:37.957236 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 16:12:37.957391 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 16:12:37.957555 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 16:12:37.957763 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 16:12:37.957942 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 16:12:37.958124 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 16:12:37.958291 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 16:12:37.958452 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 16:12:37.958608 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 16:12:37.967862 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 16:12:37.968060 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 16:12:37.968264 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 16:12:37.968443 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 16:12:37.968613 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 16:12:37.968841 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 16:12:37.969043 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 16:12:37.969210 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 16:12:37.969370 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 16:12:37.969535 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 16:12:37.969708 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 16:12:37.969901 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 16:12:37.970070 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 16:12:37.970248 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 16:12:37.970415 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 16:12:37.970571 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 16:12:37.970758 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 16:12:37.970931 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 16:12:37.971093 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 16:12:37.971257 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 16:12:37.971415 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 16:12:37.971570 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 16:12:37.971759 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 16:12:37.971929 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 16:12:37.972104 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 16:12:37.972269 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 16:12:37.972423 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 16:12:37.972577 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 16:12:37.972752 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 16:12:37.972936 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 16:12:37.973094 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 16:12:37.973113 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 16:12:37.973127 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 16:12:37.973146 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 16:12:37.973159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 16:12:37.973173 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 16:12:37.973186 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 16:12:37.973199 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 16:12:37.973212 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 16:12:37.973231 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 16:12:37.973256 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 16:12:37.973268 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 16:12:37.973285 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 16:12:37.973303 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 16:12:37.973328 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 16:12:37.973341 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 16:12:37.973353 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 16:12:37.973366 kernel: iommu: Default domain type: Translated
Dec 13 16:12:37.973379 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 16:12:37.973545 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 16:12:37.973730 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 16:12:37.973936 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 16:12:37.973956 kernel: vgaarb: loaded
Dec 13 16:12:37.973969 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 16:12:37.973982 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 16:12:37.973995 kernel: PTP clock support registered
Dec 13 16:12:37.974009 kernel: PCI: Using ACPI for IRQ routing
Dec 13 16:12:37.974021 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 16:12:37.974034 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 16:12:37.974054 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 16:12:37.974067 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 16:12:37.974080 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 16:12:37.974093 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 16:12:37.974106 kernel: pnp: PnP ACPI init
Dec 13 16:12:37.974343 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 16:12:37.974375 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 16:12:37.974389 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 16:12:37.974409 kernel: NET: Registered PF_INET protocol family
Dec 13 16:12:37.974422 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 16:12:37.974435 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 16:12:37.974449 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 16:12:37.974461 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 16:12:37.974481 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 16:12:37.974493 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 16:12:37.974507 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 16:12:37.974520 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 16:12:37.974545 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 16:12:37.974558 kernel: NET: Registered PF_XDP protocol family
Dec 13 16:12:37.976795 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 16:12:37.976989 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 16:12:37.977156 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 16:12:37.977319 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 16:12:37.977479 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 16:12:37.977665 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 16:12:37.977826 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 16:12:37.977999 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 16:12:37.978155 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 16:12:37.978310 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 16:12:37.978469 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 16:12:37.978650 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 16:12:37.978810 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 16:12:37.978988 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 16:12:37.979143 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 16:12:37.979297 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 16:12:37.979460 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 16:12:37.979625 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 16:12:37.979801 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 16:12:37.979980 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 16:12:37.980136 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 16:12:37.980300 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 16:12:37.980478 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 16:12:37.980652 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 16:12:37.980813 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 16:12:37.980982 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 16:12:37.981137 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 16:12:37.981292 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 16:12:37.981456 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 16:12:37.981621 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 16:12:37.981821 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 16:12:37.981991 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 16:12:37.982148 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 16:12:37.982324 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 16:12:37.982483 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 16:12:37.982658 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 16:12:37.995050 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 16:12:37.995228 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 16:12:37.995404 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 16:12:37.995563 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 16:12:37.995746 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 16:12:37.995938 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 16:12:37.996099 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 16:12:37.996267 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 16:12:37.996424 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 16:12:37.996581 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 16:12:37.996756 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 16:12:37.996936 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 16:12:37.997103 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 16:12:37.997262 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 16:12:37.997414 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 16:12:37.997559 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 16:12:37.997718 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 16:12:37.997876 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 16:12:37.998037 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 16:12:37.998195 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 16:12:37.998370 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 16:12:37.998536 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 16:12:37.998707 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 16:12:37.998896 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 16:12:37.999075 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 16:12:37.999241 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 16:12:37.999406 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 16:12:37.999589 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 16:12:37.999767 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 16:12:37.999940 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 16:12:38.000105 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 16:12:38.000270 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 16:12:38.000425 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 16:12:38.000595 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 16:12:38.000765 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 16:12:38.000933 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 16:12:38.001121 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 16:12:38.001275 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 16:12:38.001432 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 16:12:38.001616 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 16:12:38.008330 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 16:12:38.008488 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 16:12:38.008672 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 16:12:38.008840 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 16:12:38.008999 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 16:12:38.009021 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 16:12:38.009036 kernel: PCI: CLS 0 bytes, 
default 64 Dec 13 16:12:38.009057 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 16:12:38.009071 kernel: software IO TLB: mapped [mem 0x0000000073000000-0x0000000077000000] (64MB) Dec 13 16:12:38.009085 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 16:12:38.009099 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 16:12:38.009113 kernel: Initialise system trusted keyrings Dec 13 16:12:38.009134 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 16:12:38.009147 kernel: Key type asymmetric registered Dec 13 16:12:38.009160 kernel: Asymmetric key parser 'x509' registered Dec 13 16:12:38.009174 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 16:12:38.009201 kernel: io scheduler mq-deadline registered Dec 13 16:12:38.009215 kernel: io scheduler kyber registered Dec 13 16:12:38.009228 kernel: io scheduler bfq registered Dec 13 16:12:38.009408 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 16:12:38.009569 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 16:12:38.009860 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 16:12:38.010024 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 16:12:38.010180 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 16:12:38.010346 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 16:12:38.010515 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 16:12:38.010685 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 16:12:38.010855 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 16:12:38.011015 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 16:12:38.011170 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 16:12:38.011333 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 16:12:38.011491 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 16:12:38.011670 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 16:12:38.011839 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 16:12:38.012001 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 16:12:38.012156 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 16:12:38.012325 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 16:12:38.012489 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 16:12:38.012656 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 16:12:38.012816 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 16:12:38.012985 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 16:12:38.013142 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 16:12:38.013304 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 16:12:38.013326 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 16:12:38.013341 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 16:12:38.013355 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 16:12:38.013369 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 16:12:38.013382 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 16:12:38.013396 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 16:12:38.013423 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 16:12:38.013438 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 16:12:38.013613 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 16:12:38.013655 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 16:12:38.013806 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 16:12:38.013969 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T16:12:37 UTC (1734106357) Dec 13 16:12:38.014117 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 16:12:38.014144 kernel: intel_pstate: CPU model not supported Dec 13 16:12:38.014165 kernel: NET: Registered PF_INET6 protocol family Dec 13 16:12:38.014179 kernel: Segment Routing with IPv6 Dec 13 16:12:38.014194 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 16:12:38.014208 kernel: NET: Registered PF_PACKET protocol family Dec 13 16:12:38.014222 kernel: Key type dns_resolver registered Dec 13 16:12:38.014235 kernel: IPI shorthand broadcast: enabled Dec 13 16:12:38.014248 kernel: sched_clock: Marking stable (999467001, 224313111)->(1507738564, -283958452) Dec 13 16:12:38.014262 kernel: registered taskstats version 1 Dec 13 16:12:38.014275 kernel: Loading compiled-in X.509 certificates Dec 13 16:12:38.014293 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 16:12:38.014306 kernel: Key type .fscrypt registered Dec 13 16:12:38.014320 kernel: Key type fscrypt-provisioning registered Dec 13 16:12:38.014340 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 16:12:38.014354 kernel: ima: Allocated hash algorithm: sha1 Dec 13 16:12:38.014367 kernel: ima: No architecture policies found Dec 13 16:12:38.014380 kernel: clk: Disabling unused clocks Dec 13 16:12:38.014394 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 16:12:38.014408 kernel: Write protecting the kernel read-only data: 28672k Dec 13 16:12:38.014429 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 16:12:38.014443 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 16:12:38.014466 kernel: Run /init as init process Dec 13 16:12:38.014479 kernel: with arguments: Dec 13 16:12:38.014492 kernel: /init Dec 13 16:12:38.014505 kernel: with environment: Dec 13 16:12:38.014519 kernel: HOME=/ Dec 13 16:12:38.014539 kernel: TERM=linux Dec 13 16:12:38.014552 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 16:12:38.014581 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 16:12:38.014600 systemd[1]: Detected virtualization kvm. Dec 13 16:12:38.014620 systemd[1]: Detected architecture x86-64. Dec 13 16:12:38.015682 systemd[1]: Running in initrd. Dec 13 16:12:38.015705 systemd[1]: No hostname configured, using default hostname. Dec 13 16:12:38.015719 systemd[1]: Hostname set to <localhost>. Dec 13 16:12:38.015734 systemd[1]: Initializing machine ID from VM UUID. Dec 13 16:12:38.015755 systemd[1]: Queued start job for default target initrd.target. Dec 13 16:12:38.015769 systemd[1]: Started systemd-ask-password-console.path. Dec 13 16:12:38.015784 systemd[1]: Reached target cryptsetup.target. Dec 13 16:12:38.015798 systemd[1]: Reached target paths.target. Dec 13 16:12:38.015812 systemd[1]: Reached target slices.target. 
Dec 13 16:12:38.015826 systemd[1]: Reached target swap.target. Dec 13 16:12:38.015853 systemd[1]: Reached target timers.target. Dec 13 16:12:38.015868 systemd[1]: Listening on iscsid.socket. Dec 13 16:12:38.015888 systemd[1]: Listening on iscsiuio.socket. Dec 13 16:12:38.015902 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 16:12:38.015917 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 16:12:38.015931 systemd[1]: Listening on systemd-journald.socket. Dec 13 16:12:38.015946 systemd[1]: Listening on systemd-networkd.socket. Dec 13 16:12:38.015960 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 16:12:38.015975 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 16:12:38.015989 systemd[1]: Reached target sockets.target. Dec 13 16:12:38.016003 systemd[1]: Starting kmod-static-nodes.service... Dec 13 16:12:38.016022 systemd[1]: Finished network-cleanup.service. Dec 13 16:12:38.016037 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 16:12:38.016051 systemd[1]: Starting systemd-journald.service... Dec 13 16:12:38.016065 systemd[1]: Starting systemd-modules-load.service... Dec 13 16:12:38.016080 systemd[1]: Starting systemd-resolved.service... Dec 13 16:12:38.016094 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 16:12:38.016108 systemd[1]: Finished kmod-static-nodes.service. Dec 13 16:12:38.016130 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 16:12:38.016160 systemd-journald[201]: Journal started Dec 13 16:12:38.016245 systemd-journald[201]: Runtime Journal (/run/log/journal/01f2f98a2af4435a940c6860c3d3e59b) is 4.7M, max 38.1M, 33.3M free. 
Dec 13 16:12:37.939881 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 16:12:38.035253 kernel: Bridge firewalling registered Dec 13 16:12:37.993729 systemd-resolved[203]: Positive Trust Anchors: Dec 13 16:12:38.051372 systemd[1]: Started systemd-resolved.service. Dec 13 16:12:38.051406 kernel: audit: type=1130 audit(1734106358.035:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.051437 systemd[1]: Started systemd-journald.service. Dec 13 16:12:38.051457 kernel: audit: type=1130 audit(1734106358.042:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.051475 kernel: SCSI subsystem initialized Dec 13 16:12:38.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:37.993748 systemd-resolved[203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 16:12:37.993793 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 16:12:37.997784 systemd-resolved[203]: Defaulting to hostname 'linux'. Dec 13 16:12:38.021450 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 16:12:38.056770 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 16:12:38.078074 kernel: audit: type=1130 audit(1734106358.055:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.078109 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 16:12:38.078129 kernel: device-mapper: uevent: version 1.0.3 Dec 13 16:12:38.078155 kernel: audit: type=1130 audit(1734106358.056:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:12:38.057565 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 16:12:38.058408 systemd[1]: Reached target nss-lookup.target. Dec 13 16:12:38.062337 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 16:12:38.071494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 16:12:38.089776 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 16:12:38.089812 kernel: audit: type=1130 audit(1734106358.057:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.090394 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 16:12:38.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.097676 kernel: audit: type=1130 audit(1734106358.091:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.103128 systemd-modules-load[202]: Inserted module 'dm_multipath' Dec 13 16:12:38.111011 kernel: audit: type=1130 audit(1734106358.103:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:12:38.103929 systemd[1]: Finished systemd-modules-load.service. Dec 13 16:12:38.112018 systemd[1]: Starting systemd-sysctl.service... Dec 13 16:12:38.113258 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 16:12:38.122487 kernel: audit: type=1130 audit(1734106358.114:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.116391 systemd[1]: Starting dracut-cmdline.service... Dec 13 16:12:38.126213 systemd[1]: Finished systemd-sysctl.service. Dec 13 16:12:38.148186 kernel: audit: type=1130 audit(1734106358.126:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.150949 dracut-cmdline[223]: dracut-dracut-053 Dec 13 16:12:38.154418 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 16:12:38.237673 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 16:12:38.259679 kernel: iscsi: registered transport (tcp) Dec 13 16:12:38.289772 kernel: iscsi: registered transport (qla4xxx) Dec 13 16:12:38.289857 kernel: QLogic iSCSI HBA Driver Dec 13 16:12:38.338326 systemd[1]: Finished dracut-cmdline.service. Dec 13 16:12:38.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.340388 systemd[1]: Starting dracut-pre-udev.service... Dec 13 16:12:38.399678 kernel: raid6: sse2x4 gen() 7473 MB/s Dec 13 16:12:38.417709 kernel: raid6: sse2x4 xor() 4746 MB/s Dec 13 16:12:38.435756 kernel: raid6: sse2x2 gen() 5266 MB/s Dec 13 16:12:38.453715 kernel: raid6: sse2x2 xor() 7751 MB/s Dec 13 16:12:38.471714 kernel: raid6: sse2x1 gen() 5515 MB/s Dec 13 16:12:38.490464 kernel: raid6: sse2x1 xor() 7200 MB/s Dec 13 16:12:38.490527 kernel: raid6: using algorithm sse2x4 gen() 7473 MB/s Dec 13 16:12:38.490547 kernel: raid6: .... xor() 4746 MB/s, rmw enabled Dec 13 16:12:38.491725 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 16:12:38.509696 kernel: xor: automatically using best checksumming function avx Dec 13 16:12:38.629784 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 16:12:38.643916 systemd[1]: Finished dracut-pre-udev.service. Dec 13 16:12:38.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.644000 audit: BPF prog-id=7 op=LOAD Dec 13 16:12:38.644000 audit: BPF prog-id=8 op=LOAD Dec 13 16:12:38.646130 systemd[1]: Starting systemd-udevd.service... Dec 13 16:12:38.664276 systemd-udevd[402]: Using default interface naming scheme 'v252'. Dec 13 16:12:38.672855 systemd[1]: Started systemd-udevd.service. 
Dec 13 16:12:38.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.678047 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 16:12:38.697887 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Dec 13 16:12:38.740611 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 16:12:38.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.747533 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 16:12:38.843933 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 16:12:38.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:38.945348 kernel: ACPI: bus type USB registered Dec 13 16:12:38.945459 kernel: usbcore: registered new interface driver usbfs Dec 13 16:12:38.947654 kernel: usbcore: registered new interface driver hub Dec 13 16:12:38.947709 kernel: usbcore: registered new device driver usb Dec 13 16:12:38.955658 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 16:12:39.005736 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 16:12:39.005775 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 16:12:39.005793 kernel: GPT:17805311 != 125829119 Dec 13 16:12:39.005848 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 16:12:39.005867 kernel: GPT:17805311 != 125829119 Dec 13 16:12:39.005883 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 16:12:39.005900 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 16:12:39.005917 kernel: AVX version of gcm_enc/dec engaged. Dec 13 16:12:39.010668 kernel: AES CTR mode by8 optimization enabled Dec 13 16:12:39.033663 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Dec 13 16:12:39.052949 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 16:12:39.161114 kernel: libata version 3.00 loaded. Dec 13 16:12:39.161151 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 16:12:39.161425 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 16:12:39.161673 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 16:12:39.161893 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 16:12:39.162079 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 16:12:39.162259 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 16:12:39.162438 kernel: hub 1-0:1.0: USB hub found Dec 13 16:12:39.162677 kernel: hub 1-0:1.0: 4 ports detected Dec 13 16:12:39.162891 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 16:12:39.163185 kernel: hub 2-0:1.0: USB hub found Dec 13 16:12:39.163422 kernel: hub 2-0:1.0: 4 ports detected Dec 13 16:12:39.163647 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 16:12:39.163889 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 16:12:39.163912 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 16:12:39.164097 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 16:12:39.164258 kernel: scsi host0: ahci Dec 13 16:12:39.164511 kernel: scsi host1: ahci Dec 13 16:12:39.164728 kernel: scsi host2: ahci Dec 13 16:12:39.164957 kernel: scsi host3: ahci Dec 13 16:12:39.165154 kernel: scsi host4: ahci Dec 13 16:12:39.165354 kernel: scsi host5: ahci Dec 13 16:12:39.165583 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 16:12:39.165627 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 16:12:39.165661 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 16:12:39.165679 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 16:12:39.165696 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Dec 13 16:12:39.165713 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 16:12:39.160139 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 16:12:39.166514 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 16:12:39.173009 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 16:12:39.179642 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 16:12:39.181507 systemd[1]: Starting disk-uuid.service... Dec 13 16:12:39.188485 disk-uuid[529]: Primary Header is updated. Dec 13 16:12:39.188485 disk-uuid[529]: Secondary Entries is updated. Dec 13 16:12:39.188485 disk-uuid[529]: Secondary Header is updated. 
Dec 13 16:12:39.193838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 16:12:39.200663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 16:12:39.309691 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 16:12:39.405967 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 16:12:39.406061 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 16:12:39.406667 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 16:12:39.409440 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 16:12:39.411104 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 16:12:39.412779 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 16:12:39.451703 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 16:12:39.459998 kernel: usbcore: registered new interface driver usbhid Dec 13 16:12:39.460033 kernel: usbhid: USB HID core driver Dec 13 16:12:39.469793 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 13 16:12:39.469836 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 16:12:40.210672 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 16:12:40.210837 disk-uuid[530]: The operation has completed successfully. Dec 13 16:12:40.258613 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 16:12:40.259929 systemd[1]: Finished disk-uuid.service. Dec 13 16:12:40.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.266467 systemd[1]: Starting verity-setup.service... 
Dec 13 16:12:40.285666 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 16:12:40.342604 systemd[1]: Found device dev-mapper-usr.device. Dec 13 16:12:40.344543 systemd[1]: Mounting sysusr-usr.mount... Dec 13 16:12:40.346306 systemd[1]: Finished verity-setup.service. Dec 13 16:12:40.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.439703 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 16:12:40.440906 systemd[1]: Mounted sysusr-usr.mount. Dec 13 16:12:40.441798 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 16:12:40.442875 systemd[1]: Starting ignition-setup.service... Dec 13 16:12:40.445572 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 16:12:40.463939 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:12:40.463989 kernel: BTRFS info (device vda6): using free space tree Dec 13 16:12:40.464009 kernel: BTRFS info (device vda6): has skinny extents Dec 13 16:12:40.479721 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 16:12:40.486845 systemd[1]: Finished ignition-setup.service. Dec 13 16:12:40.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.488728 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 16:12:40.597256 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 16:12:40.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:12:40.599000 audit: BPF prog-id=9 op=LOAD Dec 13 16:12:40.600887 systemd[1]: Starting systemd-networkd.service... Dec 13 16:12:40.649532 systemd-networkd[710]: lo: Link UP Dec 13 16:12:40.650701 systemd-networkd[710]: lo: Gained carrier Dec 13 16:12:40.654645 systemd-networkd[710]: Enumeration completed Dec 13 16:12:40.655845 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:12:40.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.656018 systemd[1]: Started systemd-networkd.service. Dec 13 16:12:40.657381 systemd[1]: Reached target network.target. Dec 13 16:12:40.671840 ignition[627]: Ignition 2.14.0 Dec 13 16:12:40.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.657676 systemd-networkd[710]: eth0: Link UP Dec 13 16:12:40.671862 ignition[627]: Stage: fetch-offline Dec 13 16:12:40.657682 systemd-networkd[710]: eth0: Gained carrier Dec 13 16:12:40.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.671991 ignition[627]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:12:40.659546 systemd[1]: Starting iscsiuio.service... Dec 13 16:12:40.672040 ignition[627]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 16:12:40.676792 systemd[1]: Started iscsiuio.service. 
Dec 13 16:12:40.673983 ignition[627]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 16:12:40.679036 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 16:12:40.674160 ignition[627]: parsed url from cmdline: "" Dec 13 16:12:40.681490 systemd[1]: Starting ignition-fetch.service... Dec 13 16:12:40.674167 ignition[627]: no config URL provided Dec 13 16:12:40.687507 systemd[1]: Starting iscsid.service... Dec 13 16:12:40.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.674177 ignition[627]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 16:12:40.697443 systemd[1]: Started iscsid.service. Dec 13 16:12:40.674193 ignition[627]: no config at "/usr/lib/ignition/user.ign" Dec 13 16:12:40.703630 iscsid[720]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 16:12:40.703630 iscsid[720]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 16:12:40.703630 iscsid[720]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 16:12:40.703630 iscsid[720]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 16:12:40.703630 iscsid[720]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 16:12:40.703630 iscsid[720]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 16:12:40.703630 iscsid[720]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 16:12:40.700617 systemd[1]: Starting dracut-initqueue.service... 
Dec 13 16:12:40.674202 ignition[627]: failed to fetch config: resource requires networking Dec 13 16:12:40.712808 systemd-networkd[710]: eth0: DHCPv4 address 10.230.57.126/30, gateway 10.230.57.125 acquired from 10.230.57.125 Dec 13 16:12:40.674399 ignition[627]: Ignition finished successfully Dec 13 16:12:40.695022 ignition[715]: Ignition 2.14.0 Dec 13 16:12:40.695033 ignition[715]: Stage: fetch Dec 13 16:12:40.695205 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:12:40.695240 ignition[715]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 16:12:40.696873 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 16:12:40.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.721393 systemd[1]: Finished dracut-initqueue.service. Dec 13 16:12:40.697032 ignition[715]: parsed url from cmdline: "" Dec 13 16:12:40.722424 systemd[1]: Reached target remote-fs-pre.target. Dec 13 16:12:40.697039 ignition[715]: no config URL provided Dec 13 16:12:40.723034 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 16:12:40.697049 ignition[715]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 16:12:40.723628 systemd[1]: Reached target remote-fs.target. Dec 13 16:12:40.697065 ignition[715]: no config at "/usr/lib/ignition/user.ign" Dec 13 16:12:40.725339 systemd[1]: Starting dracut-pre-mount.service... Dec 13 16:12:40.707505 ignition[715]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 16:12:40.707566 ignition[715]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 13 16:12:40.708963 ignition[715]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 16:12:40.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.738892 systemd[1]: Finished dracut-pre-mount.service. Dec 13 16:12:40.731074 ignition[715]: GET result: OK Dec 13 16:12:40.739385 unknown[715]: fetched base config from "system" Dec 13 16:12:40.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.731156 ignition[715]: parsing config with SHA512: f71c2002e97f7fcb689894c4335dae2cf4bcdb8e212a813d8d9b73a9a83e3b212d28fe7d78d6efa3f989d8e9b95fb7fe6c057efe35078b1b1fb861ccd67cf2b2 Dec 13 16:12:40.739403 unknown[715]: fetched base config from "system" Dec 13 16:12:40.740500 ignition[715]: fetch: fetch complete Dec 13 16:12:40.739417 unknown[715]: fetched user config from "openstack" Dec 13 16:12:40.740510 ignition[715]: fetch: fetch passed Dec 13 16:12:40.742563 systemd[1]: Finished ignition-fetch.service. Dec 13 16:12:40.740576 ignition[715]: Ignition finished successfully Dec 13 16:12:40.744857 systemd[1]: Starting ignition-kargs.service... Dec 13 16:12:40.756916 ignition[735]: Ignition 2.14.0 Dec 13 16:12:40.756933 ignition[735]: Stage: kargs Dec 13 16:12:40.757087 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:12:40.757120 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 16:12:40.758398 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 16:12:40.760543 systemd[1]: Finished ignition-kargs.service. 
Dec 13 16:12:40.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.759600 ignition[735]: kargs: kargs passed Dec 13 16:12:40.762379 systemd[1]: Starting ignition-disks.service... Dec 13 16:12:40.759680 ignition[735]: Ignition finished successfully Dec 13 16:12:40.773054 ignition[741]: Ignition 2.14.0 Dec 13 16:12:40.773073 ignition[741]: Stage: disks Dec 13 16:12:40.773227 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:12:40.773271 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 16:12:40.774598 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 16:12:40.775884 ignition[741]: disks: disks passed Dec 13 16:12:40.777127 systemd[1]: Finished ignition-disks.service. Dec 13 16:12:40.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.775951 ignition[741]: Ignition finished successfully Dec 13 16:12:40.778502 systemd[1]: Reached target initrd-root-device.target. Dec 13 16:12:40.779606 systemd[1]: Reached target local-fs-pre.target. Dec 13 16:12:40.781072 systemd[1]: Reached target local-fs.target. Dec 13 16:12:40.782340 systemd[1]: Reached target sysinit.target. Dec 13 16:12:40.783579 systemd[1]: Reached target basic.target. Dec 13 16:12:40.786075 systemd[1]: Starting systemd-fsck-root.service... Dec 13 16:12:40.806288 systemd-fsck[748]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 16:12:40.813085 systemd[1]: Finished systemd-fsck-root.service. 
Dec 13 16:12:40.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.814865 systemd[1]: Mounting sysroot.mount... Dec 13 16:12:40.828266 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 16:12:40.827666 systemd[1]: Mounted sysroot.mount. Dec 13 16:12:40.829392 systemd[1]: Reached target initrd-root-fs.target. Dec 13 16:12:40.832368 systemd[1]: Mounting sysroot-usr.mount... Dec 13 16:12:40.834396 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 16:12:40.836439 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 16:12:40.838043 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 16:12:40.839210 systemd[1]: Reached target ignition-diskful.target. Dec 13 16:12:40.842044 systemd[1]: Mounted sysroot-usr.mount. Dec 13 16:12:40.845483 systemd[1]: Starting initrd-setup-root.service... Dec 13 16:12:40.852822 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 16:12:40.866827 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Dec 13 16:12:40.874908 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 16:12:40.882518 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 16:12:40.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:40.950188 systemd[1]: Finished initrd-setup-root.service. Dec 13 16:12:40.952215 systemd[1]: Starting ignition-mount.service... Dec 13 16:12:40.958698 systemd[1]: Starting sysroot-boot.service... 
Dec 13 16:12:40.965684 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 16:12:40.978588 ignition[803]: INFO : Ignition 2.14.0 Dec 13 16:12:40.979644 ignition[803]: INFO : Stage: mount Dec 13 16:12:40.980539 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:12:40.981573 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 16:12:40.987657 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 16:12:40.987657 ignition[803]: INFO : mount: mount passed Dec 13 16:12:40.987657 ignition[803]: INFO : Ignition finished successfully Dec 13 16:12:40.996004 systemd[1]: Finished ignition-mount.service. Dec 13 16:12:40.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:41.001244 systemd[1]: Finished sysroot-boot.service. Dec 13 16:12:41.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:41.005506 coreos-metadata[754]: Dec 13 16:12:41.005 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 16:12:41.020614 coreos-metadata[754]: Dec 13 16:12:41.020 INFO Fetch successful Dec 13 16:12:41.021610 coreos-metadata[754]: Dec 13 16:12:41.021 INFO wrote hostname srv-n7j3g.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 16:12:41.024784 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 16:12:41.024926 systemd[1]: Finished flatcar-openstack-hostname.service. 
Dec 13 16:12:41.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:41.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:41.365891 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 16:12:41.377677 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (811) Dec 13 16:12:41.382332 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:12:41.382373 kernel: BTRFS info (device vda6): using free space tree Dec 13 16:12:41.382393 kernel: BTRFS info (device vda6): has skinny extents Dec 13 16:12:41.389787 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 16:12:41.392579 systemd[1]: Starting ignition-files.service... 
Dec 13 16:12:41.413584 ignition[831]: INFO : Ignition 2.14.0 Dec 13 16:12:41.413584 ignition[831]: INFO : Stage: files Dec 13 16:12:41.415409 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:12:41.415409 ignition[831]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 16:12:41.415409 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 16:12:41.418736 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Dec 13 16:12:41.418736 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 16:12:41.418736 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 16:12:41.421678 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 16:12:41.423168 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 16:12:41.424308 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 16:12:41.423949 unknown[831]: wrote ssh authorized keys file for user: core Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 16:12:41.426997 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 16:12:42.019170 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 16:12:42.202292 systemd-networkd[710]: eth0: Gained IPv6LL Dec 13 16:12:43.703791 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 16:12:43.707344 ignition[831]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 16:12:43.707344 ignition[831]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 16:12:43.707344 ignition[831]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 16:12:43.707344 ignition[831]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 16:12:43.713122 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e5f:24:19ff:fee6:397e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e5f:24:19ff:fee6:397e/64 assigned by NDisc. 
Dec 13 16:12:43.713141 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 16:12:43.720146 ignition[831]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 16:12:43.720146 ignition[831]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 16:12:43.720146 ignition[831]: INFO : files: files passed Dec 13 16:12:43.720146 ignition[831]: INFO : Ignition finished successfully Dec 13 16:12:43.736797 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 16:12:43.736842 kernel: audit: type=1130 audit(1734106363.724:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.720460 systemd[1]: Finished ignition-files.service. Dec 13 16:12:43.725835 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 16:12:43.735150 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 16:12:43.753158 kernel: audit: type=1130 audit(1734106363.741:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.753191 kernel: audit: type=1131 audit(1734106363.741:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:12:43.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.736164 systemd[1]: Starting ignition-quench.service... Dec 13 16:12:43.759402 kernel: audit: type=1130 audit(1734106363.753:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.759488 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 16:12:43.740940 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 16:12:43.741094 systemd[1]: Finished ignition-quench.service. Dec 13 16:12:43.744382 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 16:12:43.754171 systemd[1]: Reached target ignition-complete.target. Dec 13 16:12:43.761213 systemd[1]: Starting initrd-parse-etc.service... Dec 13 16:12:43.785375 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 16:12:43.786483 systemd[1]: Finished initrd-parse-etc.service. Dec 13 16:12:43.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.788192 systemd[1]: Reached target initrd-fs.target. 
Dec 13 16:12:43.798375 kernel: audit: type=1130 audit(1734106363.787:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.798413 kernel: audit: type=1131 audit(1734106363.787:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.799244 systemd[1]: Reached target initrd.target. Dec 13 16:12:43.800026 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 16:12:43.801152 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 16:12:43.817686 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 16:12:43.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.819454 systemd[1]: Starting initrd-cleanup.service... Dec 13 16:12:43.825593 kernel: audit: type=1130 audit(1734106363.817:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.833725 systemd[1]: Stopped target nss-lookup.target. Dec 13 16:12:43.834558 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 16:12:43.835975 systemd[1]: Stopped target timers.target. Dec 13 16:12:43.837179 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 16:12:43.837425 systemd[1]: Stopped dracut-pre-pivot.service. 
Dec 13 16:12:43.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.846694 kernel: audit: type=1131 audit(1734106363.839:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.844933 systemd[1]: Stopped target initrd.target. Dec 13 16:12:43.845708 systemd[1]: Stopped target basic.target. Dec 13 16:12:43.846398 systemd[1]: Stopped target ignition-complete.target. Dec 13 16:12:43.847207 systemd[1]: Stopped target ignition-diskful.target. Dec 13 16:12:43.848496 systemd[1]: Stopped target initrd-root-device.target. Dec 13 16:12:43.849814 systemd[1]: Stopped target remote-fs.target. Dec 13 16:12:43.851020 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 16:12:43.852268 systemd[1]: Stopped target sysinit.target. Dec 13 16:12:43.854190 systemd[1]: Stopped target local-fs.target. Dec 13 16:12:43.855462 systemd[1]: Stopped target local-fs-pre.target. Dec 13 16:12:43.856788 systemd[1]: Stopped target swap.target. Dec 13 16:12:43.864420 kernel: audit: type=1131 audit(1734106363.858:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.858036 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 16:12:43.858316 systemd[1]: Stopped dracut-pre-mount.service. 
Dec 13 16:12:43.886041 kernel: audit: type=1131 audit(1734106363.880:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.859464 systemd[1]: Stopped target cryptsetup.target. Dec 13 16:12:43.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.879544 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 16:12:43.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.879904 systemd[1]: Stopped dracut-initqueue.service. Dec 13 16:12:43.880935 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 16:12:43.881156 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 16:12:43.886997 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 16:12:43.887240 systemd[1]: Stopped ignition-files.service. Dec 13 16:12:43.889794 systemd[1]: Stopping ignition-mount.service... Dec 13 16:12:43.900242 iscsid[720]: iscsid shutting down. Dec 13 16:12:43.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.897782 systemd[1]: Stopping iscsid.service... 
Dec 13 16:12:43.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.899538 systemd[1]: Stopping sysroot-boot.service... Dec 13 16:12:43.900253 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 16:12:43.900549 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 16:12:43.901562 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 16:12:43.901804 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 16:12:43.905868 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 16:12:43.906094 systemd[1]: Stopped iscsid.service. Dec 13 16:12:43.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.912067 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 16:12:43.912506 systemd[1]: Finished initrd-cleanup.service. Dec 13 16:12:43.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.916288 systemd[1]: Stopping iscsiuio.service... Dec 13 16:12:43.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:12:43.921995 ignition[869]: INFO : Ignition 2.14.0 Dec 13 16:12:43.921995 ignition[869]: INFO : Stage: umount Dec 13 16:12:43.921995 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:12:43.921995 ignition[869]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 16:12:43.921995 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 16:12:43.920444 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 16:12:43.929308 ignition[869]: INFO : umount: umount passed Dec 13 16:12:43.929308 ignition[869]: INFO : Ignition finished successfully Dec 13 16:12:43.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.920623 systemd[1]: Stopped iscsiuio.service. Dec 13 16:12:43.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.928965 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 13 16:12:43.929094 systemd[1]: Stopped ignition-mount.service. Dec 13 16:12:43.930100 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 16:12:43.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.930165 systemd[1]: Stopped ignition-disks.service. Dec 13 16:12:43.931302 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 16:12:43.931360 systemd[1]: Stopped ignition-kargs.service. Dec 13 16:12:43.932623 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 16:12:43.932735 systemd[1]: Stopped ignition-fetch.service. Dec 13 16:12:43.934678 systemd[1]: Stopped target network.target. Dec 13 16:12:43.935245 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 16:12:43.935321 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 16:12:43.936022 systemd[1]: Stopped target paths.target. Dec 13 16:12:43.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.936584 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 16:12:43.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.936700 systemd[1]: Stopped systemd-ask-password-console.path. 
Dec 13 16:12:43.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.937307 systemd[1]: Stopped target slices.target. Dec 13 16:12:43.937892 systemd[1]: Stopped target sockets.target. Dec 13 16:12:43.938536 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 16:12:43.938589 systemd[1]: Closed iscsid.socket. Dec 13 16:12:43.939252 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 16:12:43.939311 systemd[1]: Closed iscsiuio.socket. Dec 13 16:12:43.940505 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 16:12:43.940593 systemd[1]: Stopped ignition-setup.service. Dec 13 16:12:43.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.942049 systemd[1]: Stopping systemd-networkd.service... Dec 13 16:12:43.943546 systemd[1]: Stopping systemd-resolved.service... Dec 13 16:12:43.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:43.945720 systemd-networkd[710]: eth0: DHCPv6 lease lost Dec 13 16:12:43.975000 audit: BPF prog-id=9 op=UNLOAD Dec 13 16:12:43.948937 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 16:12:43.976000 audit: BPF prog-id=6 op=UNLOAD Dec 13 16:12:43.949613 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 16:12:43.950021 systemd[1]: Stopped systemd-networkd.service. Dec 13 16:12:43.952389 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Dec 13 16:12:43.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.952728 systemd[1]: Closed systemd-networkd.socket.
Dec 13 16:12:43.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.954350 systemd[1]: Stopping network-cleanup.service...
Dec 13 16:12:43.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.955778 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 16:12:43.955848 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 16:12:43.956538 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 16:12:43.956612 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 16:12:43.957858 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 16:12:43.957931 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 16:12:43.958875 systemd[1]: Stopping systemd-udevd.service...
Dec 13 16:12:43.968438 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 16:12:43.969197 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 16:12:43.969353 systemd[1]: Stopped systemd-resolved.service.
Dec 13 16:12:43.973964 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 16:12:43.974173 systemd[1]: Stopped systemd-udevd.service.
Dec 13 16:12:43.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.976711 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 16:12:44.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.976798 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 16:12:44.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.980191 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 16:12:43.980253 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 16:12:44.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.981481 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 16:12:44.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:44.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:43.981543 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 16:12:43.982906 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 16:12:43.982968 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 16:12:43.985053 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 16:12:43.985113 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 16:12:43.987298 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 16:12:43.998618 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 16:12:43.998738 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 16:12:44.000363 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 16:12:44.000435 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 16:12:44.001372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 16:12:44.001436 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 16:12:44.004184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 16:12:44.005001 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 16:12:44.005140 systemd[1]: Stopped network-cleanup.service.
Dec 13 16:12:44.006150 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 16:12:44.006267 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 16:12:44.043422 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 16:12:44.043586 systemd[1]: Stopped sysroot-boot.service.
Dec 13 16:12:44.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:44.045306 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 16:12:44.046286 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 16:12:44.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:44.046354 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 16:12:44.048778 systemd[1]: Starting initrd-switch-root.service...
Dec 13 16:12:44.065399 systemd[1]: Switching root.
Dec 13 16:12:44.088034 systemd-journald[201]: Journal stopped
Dec 13 16:12:48.188293 systemd-journald[201]: Received SIGTERM from PID 1 (n/a).
Dec 13 16:12:48.188435 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 16:12:48.188470 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 16:12:48.188507 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 16:12:48.188542 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 16:12:48.188582 kernel: SELinux: policy capability open_perms=1
Dec 13 16:12:48.188604 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 16:12:48.188624 kernel: SELinux: policy capability always_check_network=0
Dec 13 16:12:48.188724 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 16:12:48.188749 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 16:12:48.188768 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 16:12:48.188801 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 16:12:48.188824 systemd[1]: Successfully loaded SELinux policy in 74.224ms.
Dec 13 16:12:48.188889 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.437ms.
Dec 13 16:12:48.188922 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 16:12:48.188945 systemd[1]: Detected virtualization kvm.
Dec 13 16:12:48.188965 systemd[1]: Detected architecture x86-64.
Dec 13 16:12:48.189008 systemd[1]: Detected first boot.
Dec 13 16:12:48.189035 systemd[1]: Hostname set to .
Dec 13 16:12:48.189060 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 16:12:48.189080 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 16:12:48.189121 systemd[1]: Populated /etc with preset unit settings.
Dec 13 16:12:48.189143 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 16:12:48.189194 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 16:12:48.189243 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 16:12:48.189276 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 16:12:48.189306 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 16:12:48.189337 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 16:12:48.189372 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 16:12:48.189395 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 16:12:48.189416 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 16:12:48.189437 systemd[1]: Created slice system-getty.slice.
Dec 13 16:12:48.189458 systemd[1]: Created slice system-modprobe.slice.
Dec 13 16:12:48.189489 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 16:12:48.189527 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 16:12:48.189550 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 16:12:48.189571 systemd[1]: Created slice user.slice.
Dec 13 16:12:48.189592 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 16:12:48.189628 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 16:12:48.189681 systemd[1]: Set up automount boot.automount.
Dec 13 16:12:48.189717 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 16:12:48.189746 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 16:12:48.189768 systemd[1]: Stopped target initrd-fs.target.
Dec 13 16:12:48.189790 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 16:12:48.189823 systemd[1]: Reached target integritysetup.target.
Dec 13 16:12:48.189845 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 16:12:48.189867 systemd[1]: Reached target remote-fs.target.
Dec 13 16:12:48.189887 systemd[1]: Reached target slices.target.
Dec 13 16:12:48.189918 systemd[1]: Reached target swap.target.
Dec 13 16:12:48.189940 systemd[1]: Reached target torcx.target.
Dec 13 16:12:48.189967 systemd[1]: Reached target veritysetup.target.
Dec 13 16:12:48.189994 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 16:12:48.190028 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 16:12:48.190050 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 16:12:48.190076 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 16:12:48.190103 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 16:12:48.190131 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 16:12:48.190152 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 16:12:48.190184 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 16:12:48.190206 systemd[1]: Mounting media.mount...
Dec 13 16:12:48.190227 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 16:12:48.190248 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 16:12:48.190271 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 16:12:48.190292 systemd[1]: Mounting tmp.mount...
Dec 13 16:12:48.190313 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 16:12:48.190346 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 16:12:48.190374 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 16:12:48.190423 systemd[1]: Starting modprobe@configfs.service...
Dec 13 16:12:48.190446 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 16:12:48.190472 systemd[1]: Starting modprobe@drm.service...
Dec 13 16:12:48.190494 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 16:12:48.190527 systemd[1]: Starting modprobe@fuse.service...
Dec 13 16:12:48.190556 systemd[1]: Starting modprobe@loop.service...
Dec 13 16:12:48.190584 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 16:12:48.190607 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 16:12:48.190628 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 16:12:48.190673 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 16:12:48.190695 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 16:12:48.190716 systemd[1]: Stopped systemd-journald.service.
Dec 13 16:12:48.190742 kernel: fuse: init (API version 7.34)
Dec 13 16:12:48.190763 systemd[1]: Starting systemd-journald.service...
Dec 13 16:12:48.190789 systemd[1]: Starting systemd-modules-load.service...
Dec 13 16:12:48.190811 systemd[1]: Starting systemd-network-generator.service...
Dec 13 16:12:48.190832 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 16:12:48.190854 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 16:12:48.190888 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 16:12:48.190911 systemd[1]: Stopped verity-setup.service.
Dec 13 16:12:48.190932 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 16:12:48.190967 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 16:12:48.190994 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 16:12:48.191015 kernel: loop: module loaded
Dec 13 16:12:48.191035 systemd[1]: Mounted media.mount.
Dec 13 16:12:48.191061 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 16:12:48.191082 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 16:12:48.191123 systemd[1]: Mounted tmp.mount.
Dec 13 16:12:48.191145 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 16:12:48.191166 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 16:12:48.191195 systemd[1]: Finished modprobe@configfs.service.
Dec 13 16:12:48.191217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 16:12:48.191238 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 16:12:48.191265 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 16:12:48.191309 systemd-journald[971]: Journal started
Dec 13 16:12:48.191387 systemd-journald[971]: Runtime Journal (/run/log/journal/01f2f98a2af4435a940c6860c3d3e59b) is 4.7M, max 38.1M, 33.3M free.
Dec 13 16:12:44.267000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 16:12:44.348000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 16:12:44.348000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 16:12:44.348000 audit: BPF prog-id=10 op=LOAD
Dec 13 16:12:44.348000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 16:12:44.348000 audit: BPF prog-id=11 op=LOAD
Dec 13 16:12:44.348000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 16:12:44.480000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 16:12:44.480000 audit[901]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:12:44.480000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 16:12:44.483000 audit[901]: AVC avc: denied { associate } for pid=901 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 16:12:44.483000 audit[901]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:12:44.483000 audit: CWD cwd="/"
Dec 13 16:12:44.483000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:44.483000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:44.483000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 16:12:47.932000 audit: BPF prog-id=12 op=LOAD
Dec 13 16:12:47.932000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 16:12:47.933000 audit: BPF prog-id=13 op=LOAD
Dec 13 16:12:47.933000 audit: BPF prog-id=14 op=LOAD
Dec 13 16:12:47.933000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 16:12:47.933000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 16:12:47.935000 audit: BPF prog-id=15 op=LOAD
Dec 13 16:12:47.935000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 16:12:47.935000 audit: BPF prog-id=16 op=LOAD
Dec 13 16:12:47.935000 audit: BPF prog-id=17 op=LOAD
Dec 13 16:12:47.935000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 16:12:47.935000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 16:12:47.936000 audit: BPF prog-id=18 op=LOAD
Dec 13 16:12:47.936000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 16:12:47.937000 audit: BPF prog-id=19 op=LOAD
Dec 13 16:12:47.937000 audit: BPF prog-id=20 op=LOAD
Dec 13 16:12:47.937000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 16:12:47.937000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 16:12:47.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:47.944000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 16:12:47.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:47.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.110000 audit: BPF prog-id=21 op=LOAD
Dec 13 16:12:48.110000 audit: BPF prog-id=22 op=LOAD
Dec 13 16:12:48.110000 audit: BPF prog-id=23 op=LOAD
Dec 13 16:12:48.110000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 16:12:48.110000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 16:12:48.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.185000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 16:12:48.185000 audit[971]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd9d9abc90 a2=4000 a3=7ffd9d9abd2c items=0 ppid=1 pid=971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:12:48.185000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 16:12:48.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:44.477314 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 16:12:47.930098 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 16:12:44.478113 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 16:12:47.930124 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 16:12:44.478167 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 16:12:47.938684 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 16:12:44.478233 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 16:12:44.478263 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 16:12:44.478314 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 16:12:44.478335 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 16:12:44.478802 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 16:12:44.478868 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 16:12:44.478893 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 16:12:44.479724 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 16:12:44.479784 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 16:12:44.479815 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 16:12:44.479842 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 16:12:44.479873 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 16:12:44.479899 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 16:12:47.352094 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:12:47.352555 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:12:47.352814 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:12:47.353168 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:12:47.353266 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 16:12:47.353402 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-12-13T16:12:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 16:12:48.210585 systemd[1]: Finished modprobe@drm.service.
Dec 13 16:12:48.210663 systemd[1]: Started systemd-journald.service.
Dec 13 16:12:48.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.214172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 16:12:48.214381 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 16:12:48.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.215526 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 16:12:48.215759 systemd[1]: Finished modprobe@fuse.service.
Dec 13 16:12:48.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.216871 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 16:12:48.218948 systemd[1]: Finished modprobe@loop.service.
Dec 13 16:12:48.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.220246 systemd[1]: Finished systemd-modules-load.service.
Dec 13 16:12:48.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.221339 systemd[1]: Finished systemd-network-generator.service.
Dec 13 16:12:48.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.222415 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 16:12:48.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:48.224085 systemd[1]: Reached target network-pre.target.
Dec 13 16:12:48.226904 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 16:12:48.231017 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 16:12:48.234950 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 16:12:48.237830 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 16:12:48.240126 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 16:12:48.241605 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 16:12:48.244207 systemd[1]: Starting systemd-random-seed.service...
Dec 13 16:12:48.249119 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 16:12:48.259890 systemd[1]: Starting systemd-sysctl.service...
Dec 13 16:12:48.271012 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 16:12:48.271910 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 16:12:48.272919 systemd[1]: Finished systemd-random-seed.service. Dec 13 16:12:48.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.275009 systemd[1]: Reached target first-boot-complete.target. Dec 13 16:12:48.282711 systemd-journald[971]: Time spent on flushing to /var/log/journal/01f2f98a2af4435a940c6860c3d3e59b is 66.681ms for 1285 entries. Dec 13 16:12:48.282711 systemd-journald[971]: System Journal (/var/log/journal/01f2f98a2af4435a940c6860c3d3e59b) is 8.0M, max 584.8M, 576.8M free. Dec 13 16:12:48.378553 systemd-journald[971]: Received client request to flush runtime journal. Dec 13 16:12:48.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.293753 systemd[1]: Finished systemd-sysctl.service. Dec 13 16:12:48.341368 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 16:12:48.344211 systemd[1]: Starting systemd-sysusers.service... 
Dec 13 16:12:48.380064 systemd[1]: Finished systemd-journal-flush.service. Dec 13 16:12:48.381652 systemd[1]: Finished systemd-sysusers.service. Dec 13 16:12:48.384185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 16:12:48.405988 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 16:12:48.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.408692 systemd[1]: Starting systemd-udev-settle.service... Dec 13 16:12:48.421009 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 16:12:48.440177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 16:12:48.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.952785 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 16:12:48.962269 kernel: kauditd_printk_skb: 108 callbacks suppressed Dec 13 16:12:48.962432 kernel: audit: type=1130 audit(1734106368.952:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:48.964520 kernel: audit: type=1334 audit(1734106368.956:149): prog-id=24 op=LOAD Dec 13 16:12:48.956000 audit: BPF prog-id=24 op=LOAD Dec 13 16:12:48.963507 systemd[1]: Starting systemd-udevd.service... 
Dec 13 16:12:48.962000 audit: BPF prog-id=25 op=LOAD Dec 13 16:12:48.962000 audit: BPF prog-id=7 op=UNLOAD Dec 13 16:12:48.962000 audit: BPF prog-id=8 op=UNLOAD Dec 13 16:12:48.965657 kernel: audit: type=1334 audit(1734106368.962:150): prog-id=25 op=LOAD Dec 13 16:12:48.965703 kernel: audit: type=1334 audit(1734106368.962:151): prog-id=7 op=UNLOAD Dec 13 16:12:48.965736 kernel: audit: type=1334 audit(1734106368.962:152): prog-id=8 op=UNLOAD Dec 13 16:12:48.995978 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Dec 13 16:12:49.028752 systemd[1]: Started systemd-udevd.service. Dec 13 16:12:49.036747 kernel: audit: type=1130 audit(1734106369.028:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.036841 kernel: audit: type=1334 audit(1734106369.034:154): prog-id=26 op=LOAD Dec 13 16:12:49.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.034000 audit: BPF prog-id=26 op=LOAD Dec 13 16:12:49.036048 systemd[1]: Starting systemd-networkd.service... Dec 13 16:12:49.054138 kernel: audit: type=1334 audit(1734106369.048:155): prog-id=27 op=LOAD Dec 13 16:12:49.054227 kernel: audit: type=1334 audit(1734106369.050:156): prog-id=28 op=LOAD Dec 13 16:12:49.054260 kernel: audit: type=1334 audit(1734106369.052:157): prog-id=29 op=LOAD Dec 13 16:12:49.048000 audit: BPF prog-id=27 op=LOAD Dec 13 16:12:49.050000 audit: BPF prog-id=28 op=LOAD Dec 13 16:12:49.052000 audit: BPF prog-id=29 op=LOAD Dec 13 16:12:49.054852 systemd[1]: Starting systemd-userdbd.service... Dec 13 16:12:49.101749 systemd[1]: Started systemd-userdbd.service. 
Dec 13 16:12:49.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.136568 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 16:12:49.209753 systemd-networkd[1016]: lo: Link UP Dec 13 16:12:49.209767 systemd-networkd[1016]: lo: Gained carrier Dec 13 16:12:49.210598 systemd-networkd[1016]: Enumeration completed Dec 13 16:12:49.210741 systemd[1]: Started systemd-networkd.service. Dec 13 16:12:49.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.211701 systemd-networkd[1016]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:12:49.213987 systemd-networkd[1016]: eth0: Link UP Dec 13 16:12:49.214001 systemd-networkd[1016]: eth0: Gained carrier Dec 13 16:12:49.228850 systemd-networkd[1016]: eth0: DHCPv4 address 10.230.57.126/30, gateway 10.230.57.125 acquired from 10.230.57.125 Dec 13 16:12:49.249732 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 13 16:12:49.255657 kernel: ACPI: button: Power Button [PWRF] Dec 13 16:12:49.273695 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 16:12:49.284732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 16:12:49.317000 audit[1024]: AVC avc: denied { confidentiality } for pid=1024 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 16:12:49.317000 audit[1024]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5621f5c59d20 a1=337fc a2=7fa297dc5bc5 a3=5 items=110 ppid=1014 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:12:49.317000 audit: CWD cwd="/"
Dec 13 16:12:49.317000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=1 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=2 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=3 name=(null) inode=15641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=4 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=5 name=(null) inode=15642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=6 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=7 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=8 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=9 name=(null) inode=15644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=10 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=11 name=(null) inode=15645 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=12 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=13 name=(null) inode=15646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=14 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=15 name=(null) inode=15647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=16 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=17 name=(null) inode=15648 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=18 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=19 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=20 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=21 name=(null) inode=15650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=22 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=23 name=(null) inode=15651 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=24 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=25 name=(null) inode=15652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=26 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=27 name=(null) inode=15653 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=28 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=29 name=(null) inode=15654 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=30 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=31 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=32 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=33 name=(null) inode=15656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=34 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=35 name=(null) inode=15657 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=36 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=37 name=(null) inode=15658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=38 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=39 name=(null) inode=15659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=40 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=41 name=(null) inode=15660 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=42 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=43 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=44 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=45 name=(null) inode=15662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=46 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=47 name=(null) inode=15663 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=48 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=49 name=(null) inode=15664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=50 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=51 name=(null) inode=15665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=52 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=53 name=(null) inode=15666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=55 name=(null) inode=15667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=56 name=(null) inode=15667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=57 name=(null) inode=15668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=58 name=(null) inode=15667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=59 name=(null) inode=15669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=60 name=(null) inode=15667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=61 name=(null) inode=15670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=62 name=(null) inode=15670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=63 name=(null) inode=15671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=64 name=(null) inode=15670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=65 name=(null) inode=15672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=66 name=(null) inode=15670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=67 name=(null) inode=15673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=68 name=(null) inode=15670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=69 name=(null) inode=15674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=70 name=(null) inode=15670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=71 name=(null) inode=15675 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=72 name=(null) inode=15667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=73 name=(null) inode=15676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=74 name=(null) inode=15676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=75 name=(null) inode=15677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=76 name=(null) inode=15676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=77 name=(null) inode=15678 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=78 name=(null) inode=15676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=79 name=(null) inode=15679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=80 name=(null) inode=15676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=81 name=(null) inode=15680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=82 name=(null) inode=15676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=83 name=(null) inode=15681 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=84 name=(null) inode=15667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=85 name=(null) inode=15682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=86 name=(null) inode=15682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=87 name=(null) inode=15683 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=88 name=(null) inode=15682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=89 name=(null) inode=15684 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=90 name=(null) inode=15682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=91 name=(null) inode=15685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=92 name=(null) inode=15682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=93 name=(null) inode=15686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=94 name=(null) inode=15682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=95 name=(null) inode=15687 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=96 name=(null) inode=15667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=97 name=(null) inode=15688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=98 name=(null) inode=15688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=99 name=(null) inode=15689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=100 name=(null) inode=15688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=101 name=(null) inode=15690 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=102 name=(null) inode=15688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=103 name=(null) inode=15691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=104 name=(null) inode=15688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=105 name=(null) inode=15692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=106 name=(null) inode=15688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=107 name=(null) inode=15693 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PATH item=109 name=(null) inode=15694 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:12:49.317000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 16:12:49.383672 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 16:12:49.388658 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 16:12:49.408909 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 16:12:49.409199 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 16:12:49.542458 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 16:12:49.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:49.545407 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 16:12:49.571077 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 16:12:49.602901 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 16:12:49.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:12:49.603888 systemd[1]: Reached target cryptsetup.target.
Dec 13 16:12:49.606568 systemd[1]: Starting lvm2-activation.service... Dec 13 16:12:49.612758 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 16:12:49.637739 systemd[1]: Finished lvm2-activation.service. Dec 13 16:12:49.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.638599 systemd[1]: Reached target local-fs-pre.target. Dec 13 16:12:49.639329 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 16:12:49.639374 systemd[1]: Reached target local-fs.target. Dec 13 16:12:49.640100 systemd[1]: Reached target machines.target. Dec 13 16:12:49.642435 systemd[1]: Starting ldconfig.service... Dec 13 16:12:49.643721 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:12:49.643800 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:12:49.646268 systemd[1]: Starting systemd-boot-update.service... Dec 13 16:12:49.648702 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 16:12:49.656451 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 16:12:49.658973 systemd[1]: Starting systemd-sysext.service... Dec 13 16:12:49.660256 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Dec 13 16:12:49.662210 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 16:12:49.678910 systemd[1]: Unmounting usr-share-oem.mount... 
Dec 13 16:12:49.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.741343 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 16:12:49.744171 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 16:12:49.744450 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 16:12:49.825699 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 16:12:49.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.840217 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 16:12:49.841647 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 16:12:49.862882 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 16:12:49.888660 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 16:12:49.902183 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Dec 13 16:12:49.902183 systemd-fsck[1055]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 16:12:49.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.905514 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 16:12:49.908844 systemd[1]: Mounting boot.mount... Dec 13 16:12:49.910728 (sd-sysext)[1058]: Using extensions 'kubernetes'. Dec 13 16:12:49.911426 (sd-sysext)[1058]: Merged extensions into '/usr'. Dec 13 16:12:49.947091 systemd[1]: Mounted boot.mount. 
Dec 13 16:12:49.948898 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:12:49.954750 systemd[1]: Mounting usr-share-oem.mount... Dec 13 16:12:49.955752 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:12:49.959709 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:12:49.963186 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:12:49.967315 systemd[1]: Starting modprobe@loop.service... Dec 13 16:12:49.968112 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:12:49.968352 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:12:49.968621 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:12:49.972149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:12:49.972390 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:12:49.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.973754 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:12:49.973945 systemd[1]: Finished modprobe@loop.service. 
Dec 13 16:12:49.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.975097 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 16:12:49.980958 systemd[1]: Mounted usr-share-oem.mount. Dec 13 16:12:49.983699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:12:49.983888 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:12:49.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.984850 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 16:12:49.986401 systemd[1]: Finished systemd-sysext.service. Dec 13 16:12:49.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:49.988964 systemd[1]: Starting ensure-sysext.service... Dec 13 16:12:49.993880 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 16:12:49.997343 systemd[1]: Finished systemd-boot-update.service. 
Dec 13 16:12:49.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.003601 systemd[1]: Reloading. Dec 13 16:12:50.032274 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 16:12:50.039573 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 16:12:50.055112 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 16:12:50.134340 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2024-12-13T16:12:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 16:12:50.134395 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2024-12-13T16:12:50Z" level=info msg="torcx already run" Dec 13 16:12:50.243827 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 16:12:50.296549 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 16:12:50.296586 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 16:12:50.326026 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 16:12:50.408000 audit: BPF prog-id=30 op=LOAD Dec 13 16:12:50.408000 audit: BPF prog-id=26 op=UNLOAD Dec 13 16:12:50.409000 audit: BPF prog-id=31 op=LOAD Dec 13 16:12:50.409000 audit: BPF prog-id=32 op=LOAD Dec 13 16:12:50.409000 audit: BPF prog-id=24 op=UNLOAD Dec 13 16:12:50.409000 audit: BPF prog-id=25 op=UNLOAD Dec 13 16:12:50.411000 audit: BPF prog-id=33 op=LOAD Dec 13 16:12:50.411000 audit: BPF prog-id=27 op=UNLOAD Dec 13 16:12:50.411000 audit: BPF prog-id=34 op=LOAD Dec 13 16:12:50.412000 audit: BPF prog-id=35 op=LOAD Dec 13 16:12:50.412000 audit: BPF prog-id=28 op=UNLOAD Dec 13 16:12:50.412000 audit: BPF prog-id=29 op=UNLOAD Dec 13 16:12:50.412000 audit: BPF prog-id=36 op=LOAD Dec 13 16:12:50.412000 audit: BPF prog-id=21 op=UNLOAD Dec 13 16:12:50.413000 audit: BPF prog-id=37 op=LOAD Dec 13 16:12:50.413000 audit: BPF prog-id=38 op=LOAD Dec 13 16:12:50.413000 audit: BPF prog-id=22 op=UNLOAD Dec 13 16:12:50.413000 audit: BPF prog-id=23 op=UNLOAD Dec 13 16:12:50.420037 systemd[1]: Finished ldconfig.service. Dec 13 16:12:50.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.422536 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 16:12:50.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.430546 systemd[1]: Starting audit-rules.service... Dec 13 16:12:50.433184 systemd[1]: Starting clean-ca-certificates.service... Dec 13 16:12:50.438366 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 16:12:50.441000 audit: BPF prog-id=39 op=LOAD Dec 13 16:12:50.444662 systemd[1]: Starting systemd-resolved.service... 
Dec 13 16:12:50.448000 audit: BPF prog-id=40 op=LOAD Dec 13 16:12:50.450152 systemd[1]: Starting systemd-timesyncd.service... Dec 13 16:12:50.452820 systemd[1]: Starting systemd-update-utmp.service... Dec 13 16:12:50.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.455052 systemd[1]: Finished clean-ca-certificates.service. Dec 13 16:12:50.460285 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:12:50.466425 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.468554 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:12:50.472180 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:12:50.472000 audit[1141]: SYSTEM_BOOT pid=1141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.476090 systemd[1]: Starting modprobe@loop.service... Dec 13 16:12:50.477833 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.478060 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:12:50.478286 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:12:50.484072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:12:50.484733 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 16:12:50.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.487866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:12:50.488086 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:12:50.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.494211 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.498006 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:12:50.501209 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:12:50.503105 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.503310 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:12:50.503548 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:12:50.505175 systemd[1]: Finished systemd-update-utmp.service. 
Dec 13 16:12:50.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.507459 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:12:50.507766 systemd[1]: Finished modprobe@loop.service. Dec 13 16:12:50.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.509371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:12:50.509573 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:12:50.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.519750 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.524302 systemd[1]: Starting modprobe@drm.service... Dec 13 16:12:50.532525 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:12:50.536942 systemd[1]: Starting modprobe@loop.service... Dec 13 16:12:50.537946 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 16:12:50.538166 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:12:50.542229 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 16:12:50.544312 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:12:50.547456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:12:50.549749 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:12:50.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.551471 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 16:12:50.551731 systemd[1]: Finished modprobe@drm.service. Dec 13 16:12:50.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.555215 systemd[1]: Finished ensure-sysext.service. 
Dec 13 16:12:50.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.560465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:12:50.560728 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:12:50.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.561618 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 16:12:50.568461 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:12:50.568686 systemd[1]: Finished modprobe@loop.service. Dec 13 16:12:50.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:12:50.569878 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.570903 augenrules[1162]: No rules Dec 13 16:12:50.571361 systemd[1]: Finished audit-rules.service. 
Dec 13 16:12:50.569000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 16:12:50.569000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc0aad5670 a2=420 a3=0 items=0 ppid=1133 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 16:12:50.569000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 16:12:50.601516 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 16:12:50.605138 systemd[1]: Starting systemd-update-done.service... Dec 13 16:12:50.616613 systemd[1]: Finished systemd-update-done.service. Dec 13 16:12:50.617623 systemd-resolved[1137]: Positive Trust Anchors: Dec 13 16:12:50.618104 systemd-resolved[1137]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 16:12:50.618266 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 16:12:50.626853 systemd-resolved[1137]: Using system hostname 'srv-n7j3g.gb1.brightbox.com'. Dec 13 16:12:50.630585 systemd[1]: Started systemd-resolved.service. Dec 13 16:12:50.631529 systemd[1]: Reached target network.target. Dec 13 16:12:50.632180 systemd[1]: Reached target nss-lookup.target. Dec 13 16:12:50.650371 systemd[1]: Started systemd-timesyncd.service. Dec 13 16:12:50.651193 systemd[1]: Reached target sysinit.target. 
Dec 13 16:12:50.652066 systemd[1]: Started motdgen.path. Dec 13 16:12:50.652761 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 16:12:50.653525 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 16:12:50.654252 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 16:12:50.654311 systemd[1]: Reached target paths.target. Dec 13 16:12:50.654955 systemd[1]: Reached target time-set.target. Dec 13 16:12:50.655826 systemd[1]: Started logrotate.timer. Dec 13 16:12:50.656561 systemd[1]: Started mdadm.timer. Dec 13 16:12:50.657158 systemd[1]: Reached target timers.target. Dec 13 16:12:50.658815 systemd[1]: Listening on dbus.socket. Dec 13 16:12:50.661335 systemd[1]: Starting docker.socket... Dec 13 16:12:50.666845 systemd[1]: Listening on sshd.socket. Dec 13 16:12:50.667727 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:12:50.668473 systemd[1]: Listening on docker.socket. Dec 13 16:12:50.669241 systemd[1]: Reached target sockets.target. Dec 13 16:12:50.669873 systemd[1]: Reached target basic.target. Dec 13 16:12:50.670545 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.670595 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 16:12:50.672629 systemd[1]: Starting containerd.service... Dec 13 16:12:50.676424 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 16:12:50.679137 systemd[1]: Starting dbus.service... Dec 13 16:12:50.682319 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 16:12:50.688321 systemd[1]: Starting extend-filesystems.service... 
Dec 13 16:12:50.690797 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 16:12:50.692915 systemd[1]: Starting motdgen.service... Dec 13 16:12:50.696968 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 16:12:50.702959 systemd[1]: Starting sshd-keygen.service... Dec 13 16:12:50.710263 systemd[1]: Starting systemd-logind.service... Dec 13 16:12:50.713044 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:12:50.713199 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 16:12:50.713931 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 16:12:50.715209 systemd[1]: Starting update-engine.service... Dec 13 16:12:50.720850 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 16:12:50.738281 jq[1185]: true Dec 13 16:12:50.745754 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:12:50.745817 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:12:50.748151 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 16:12:50.748400 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 16:12:50.752293 jq[1176]: false Dec 13 16:12:50.756629 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 16:12:50.756908 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Dec 13 16:12:50.766399 jq[1194]: true Dec 13 16:12:50.803754 dbus-daemon[1173]: [system] SELinux support is enabled Dec 13 16:12:50.804562 systemd[1]: Started dbus.service. Dec 13 16:12:50.811320 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 16:12:50.814832 extend-filesystems[1177]: Found loop1 Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda1 Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda2 Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda3 Dec 13 16:12:50.814832 extend-filesystems[1177]: Found usr Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda4 Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda6 Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda7 Dec 13 16:12:50.814832 extend-filesystems[1177]: Found vda9 Dec 13 16:12:50.814832 extend-filesystems[1177]: Checking size of /dev/vda9 Dec 13 16:12:50.929004 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 16:12:50.811365 systemd[1]: Reached target system-config.target. Dec 13 16:12:50.826465 dbus-daemon[1173]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1016 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 16:12:50.932082 extend-filesystems[1177]: Resized partition /dev/vda9 Dec 13 16:12:50.934888 update_engine[1184]: I1213 16:12:50.905148 1184 main.cc:92] Flatcar Update Engine starting Dec 13 16:12:50.934888 update_engine[1184]: I1213 16:12:50.910825 1184 update_check_scheduler.cc:74] Next update check in 6m22s Dec 13 16:12:50.812141 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 13 16:12:50.946884 extend-filesystems[1216]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 16:12:50.812169 systemd[1]: Reached target user-config.target. Dec 13 16:12:50.821439 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 16:12:50.821710 systemd[1]: Finished motdgen.service. Dec 13 16:12:50.831740 systemd[1]: Starting systemd-hostnamed.service... Dec 13 16:12:50.916700 systemd[1]: Started update-engine.service. Dec 13 16:12:50.925613 systemd[1]: Started locksmithd.service. Dec 13 16:12:51.879735 systemd-resolved[1137]: Clock change detected. Flushing caches. Dec 13 16:12:51.880327 systemd-timesyncd[1140]: Contacted time server 185.177.149.33:123 (0.flatcar.pool.ntp.org). Dec 13 16:12:51.880443 systemd-timesyncd[1140]: Initial clock synchronization to Fri 2024-12-13 16:12:51.879658 UTC. Dec 13 16:12:51.896989 bash[1224]: Updated "/home/core/.ssh/authorized_keys" Dec 13 16:12:51.898350 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 16:12:51.899599 systemd-networkd[1016]: eth0: Gained IPv6LL Dec 13 16:12:51.906142 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 16:12:51.907077 systemd[1]: Reached target network-online.target. Dec 13 16:12:51.909909 systemd[1]: Starting kubelet.service... Dec 13 16:12:51.944684 env[1188]: time="2024-12-13T16:12:51.944558278Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 16:12:51.959152 systemd-logind[1183]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 16:12:51.960261 systemd-logind[1183]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 16:12:51.962139 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 16:12:51.968809 systemd-logind[1183]: New seat seat0. Dec 13 16:12:51.975696 systemd[1]: Started systemd-logind.service. 
Dec 13 16:12:51.984259 extend-filesystems[1216]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 16:12:51.984259 extend-filesystems[1216]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 16:12:51.984259 extend-filesystems[1216]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 16:12:51.988157 extend-filesystems[1177]: Resized filesystem in /dev/vda9
Dec 13 16:12:51.984734 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 16:12:51.985042 systemd[1]: Finished extend-filesystems.service.
Dec 13 16:12:52.033631 dbus-daemon[1173]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 16:12:52.034331 systemd[1]: Started systemd-hostnamed.service.
Dec 13 16:12:52.034711 dbus-daemon[1173]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1209 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 16:12:52.040518 systemd[1]: Starting polkit.service...
Dec 13 16:12:52.047046 env[1188]: time="2024-12-13T16:12:52.046974965Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 16:12:52.047321 env[1188]: time="2024-12-13T16:12:52.047286498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 16:12:52.050360 env[1188]: time="2024-12-13T16:12:52.050308877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 16:12:52.050360 env[1188]: time="2024-12-13T16:12:52.050355329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 16:12:52.050755 env[1188]: time="2024-12-13T16:12:52.050713731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 16:12:52.050755 env[1188]: time="2024-12-13T16:12:52.050750529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 16:12:52.050867 env[1188]: time="2024-12-13T16:12:52.050773260Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 16:12:52.050867 env[1188]: time="2024-12-13T16:12:52.050797514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 16:12:52.050952 env[1188]: time="2024-12-13T16:12:52.050922348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 16:12:52.051498 env[1188]: time="2024-12-13T16:12:52.051443530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 16:12:52.052610 env[1188]: time="2024-12-13T16:12:52.052546804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 16:12:52.052610 env[1188]: time="2024-12-13T16:12:52.052596260Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 16:12:52.058018 env[1188]: time="2024-12-13T16:12:52.057375235Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 16:12:52.058018 env[1188]: time="2024-12-13T16:12:52.057410461Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 16:12:52.061235 polkitd[1232]: Started polkitd version 121
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.068830938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.068874394Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.068899892Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.068978298Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.069011365Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.069035587Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.069056155Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.069078178Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.069129967Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.069189079Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.069230 env[1188]: time="2024-12-13T16:12:52.069228188Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.071293 env[1188]: time="2024-12-13T16:12:52.069259921Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 16:12:52.071293 env[1188]: time="2024-12-13T16:12:52.069469176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 16:12:52.071293 env[1188]: time="2024-12-13T16:12:52.069675405Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 16:12:52.073316 env[1188]: time="2024-12-13T16:12:52.073270337Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 16:12:52.073404 env[1188]: time="2024-12-13T16:12:52.073341420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.073404 env[1188]: time="2024-12-13T16:12:52.073381707Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 16:12:52.073564 env[1188]: time="2024-12-13T16:12:52.073523643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073554512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073721783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073763982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073790810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073811063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073831005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073857461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.073883347Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.074146809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.074181931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.074217575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.074246557Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.074279618Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 16:12:52.074489 env[1188]: time="2024-12-13T16:12:52.074301059Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 16:12:52.075109 env[1188]: time="2024-12-13T16:12:52.074354576Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 16:12:52.075109 env[1188]: time="2024-12-13T16:12:52.074460948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 16:12:52.075254 env[1188]: time="2024-12-13T16:12:52.074757544Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 16:12:52.075254 env[1188]: time="2024-12-13T16:12:52.074862039Z" level=info msg="Connect containerd service"
Dec 13 16:12:52.075254 env[1188]: time="2024-12-13T16:12:52.074939198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 16:12:52.082393 polkitd[1232]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 16:12:52.082517 polkitd[1232]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 16:12:52.087533 polkitd[1232]: Finished loading, compiling and executing 2 rules
Dec 13 16:12:52.088324 dbus-daemon[1173]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 16:12:52.088613 systemd[1]: Started polkit.service.
Dec 13 16:12:52.089138 polkitd[1232]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 16:12:52.095188 env[1188]: time="2024-12-13T16:12:52.094711622Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 16:12:52.095373 env[1188]: time="2024-12-13T16:12:52.095334076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 16:12:52.095827 env[1188]: time="2024-12-13T16:12:52.095503172Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 16:12:52.095728 systemd[1]: Started containerd.service.
Dec 13 16:12:52.096865 env[1188]: time="2024-12-13T16:12:52.096715974Z" level=info msg="containerd successfully booted in 0.155809s"
Dec 13 16:12:52.105398 systemd-hostnamed[1209]: Hostname set to (static)
Dec 13 16:12:52.109041 env[1188]: time="2024-12-13T16:12:52.108511099Z" level=info msg="Start subscribing containerd event"
Dec 13 16:12:52.109457 env[1188]: time="2024-12-13T16:12:52.109185029Z" level=info msg="Start recovering state"
Dec 13 16:12:52.109457 env[1188]: time="2024-12-13T16:12:52.109363867Z" level=info msg="Start event monitor"
Dec 13 16:12:52.111447 env[1188]: time="2024-12-13T16:12:52.109437909Z" level=info msg="Start snapshots syncer"
Dec 13 16:12:52.111447 env[1188]: time="2024-12-13T16:12:52.109667386Z" level=info msg="Start cni network conf syncer for default"
Dec 13 16:12:52.111447 env[1188]: time="2024-12-13T16:12:52.109692462Z" level=info msg="Start streaming server"
Dec 13 16:12:52.299628 locksmithd[1225]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 16:12:52.540509 sshd_keygen[1204]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 16:12:52.572133 systemd[1]: Finished sshd-keygen.service.
Dec 13 16:12:52.575690 systemd[1]: Starting issuegen.service...
Dec 13 16:12:52.584823 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 16:12:52.585093 systemd[1]: Finished issuegen.service.
Dec 13 16:12:52.588307 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 16:12:52.600024 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 16:12:52.603676 systemd[1]: Started getty@tty1.service.
Dec 13 16:12:52.606778 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 16:12:52.608054 systemd[1]: Reached target getty.target.
Dec 13 16:12:52.664866 systemd[1]: Created slice system-sshd.slice.
Dec 13 16:12:52.670346 systemd[1]: Started sshd@0-10.230.57.126:22-139.178.68.195:45342.service.
Dec 13 16:12:53.039979 systemd[1]: Started kubelet.service.
Dec 13 16:12:53.410700 systemd-networkd[1016]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e5f:24:19ff:fee6:397e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e5f:24:19ff:fee6:397e/64 assigned by NDisc.
Dec 13 16:12:53.411260 systemd-networkd[1016]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 16:12:53.574083 sshd[1259]: Accepted publickey for core from 139.178.68.195 port 45342 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 16:12:53.577215 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:12:53.597394 systemd[1]: Created slice user-500.slice.
Dec 13 16:12:53.607827 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 16:12:53.618619 systemd-logind[1183]: New session 1 of user core.
Dec 13 16:12:53.629571 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 16:12:53.633175 systemd[1]: Starting user@500.service...
Dec 13 16:12:53.641538 (systemd)[1272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:12:53.757295 systemd[1272]: Queued start job for default target default.target.
Dec 13 16:12:53.759729 systemd[1272]: Reached target paths.target.
Dec 13 16:12:53.759774 systemd[1272]: Reached target sockets.target.
Dec 13 16:12:53.759796 systemd[1272]: Reached target timers.target.
Dec 13 16:12:53.759816 systemd[1272]: Reached target basic.target.
Dec 13 16:12:53.759986 systemd[1]: Started user@500.service.
Dec 13 16:12:53.762390 systemd[1]: Started session-1.scope.
Dec 13 16:12:53.764324 systemd[1272]: Reached target default.target.
Dec 13 16:12:53.764581 systemd[1272]: Startup finished in 110ms.
Dec 13 16:12:53.797688 kubelet[1264]: E1213 16:12:53.797590 1264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 16:12:53.800121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 16:12:53.800489 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 16:12:53.800965 systemd[1]: kubelet.service: Consumed 1.143s CPU time.
Dec 13 16:12:54.422850 systemd[1]: Started sshd@1-10.230.57.126:22-139.178.68.195:45358.service.
Dec 13 16:12:55.319029 sshd[1281]: Accepted publickey for core from 139.178.68.195 port 45358 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 16:12:55.321173 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:12:55.329226 systemd-logind[1183]: New session 2 of user core.
Dec 13 16:12:55.330234 systemd[1]: Started session-2.scope.
Dec 13 16:12:55.942476 sshd[1281]: pam_unix(sshd:session): session closed for user core
Dec 13 16:12:55.946688 systemd-logind[1183]: Session 2 logged out. Waiting for processes to exit.
Dec 13 16:12:55.947367 systemd[1]: sshd@1-10.230.57.126:22-139.178.68.195:45358.service: Deactivated successfully.
Dec 13 16:12:55.948457 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 16:12:55.949595 systemd-logind[1183]: Removed session 2.
Dec 13 16:12:56.089881 systemd[1]: Started sshd@2-10.230.57.126:22-139.178.68.195:45362.service.
Dec 13 16:12:56.977782 sshd[1288]: Accepted publickey for core from 139.178.68.195 port 45362 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 16:12:56.980677 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:12:56.988297 systemd-logind[1183]: New session 3 of user core.
Dec 13 16:12:56.988325 systemd[1]: Started session-3.scope.
Dec 13 16:12:57.599717 sshd[1288]: pam_unix(sshd:session): session closed for user core
Dec 13 16:12:57.603243 systemd[1]: sshd@2-10.230.57.126:22-139.178.68.195:45362.service: Deactivated successfully.
Dec 13 16:12:57.604558 systemd-logind[1183]: Session 3 logged out. Waiting for processes to exit.
Dec 13 16:12:57.604690 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 16:12:57.606104 systemd-logind[1183]: Removed session 3.
Dec 13 16:12:58.762950 coreos-metadata[1172]: Dec 13 16:12:58.762 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 16:12:58.819139 coreos-metadata[1172]: Dec 13 16:12:58.819 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 16:12:58.846455 coreos-metadata[1172]: Dec 13 16:12:58.846 INFO Fetch successful
Dec 13 16:12:58.846849 coreos-metadata[1172]: Dec 13 16:12:58.846 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 16:12:58.883227 coreos-metadata[1172]: Dec 13 16:12:58.883 INFO Fetch successful
Dec 13 16:12:58.885662 unknown[1172]: wrote ssh authorized keys file for user: core
Dec 13 16:12:58.899409 update-ssh-keys[1295]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 16:12:58.900070 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 16:12:58.900788 systemd[1]: Reached target multi-user.target.
Dec 13 16:12:58.903539 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 16:12:58.914347 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 16:12:58.914798 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 16:12:58.915319 systemd[1]: Startup finished in 1.177s (kernel) + 6.488s (initrd) + 13.805s (userspace) = 21.471s.
Dec 13 16:13:04.051937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 16:13:04.052264 systemd[1]: Stopped kubelet.service.
Dec 13 16:13:04.052336 systemd[1]: kubelet.service: Consumed 1.143s CPU time.
Dec 13 16:13:04.054784 systemd[1]: Starting kubelet.service...
Dec 13 16:13:04.217725 systemd[1]: Started kubelet.service.
Dec 13 16:13:04.325198 kubelet[1301]: E1213 16:13:04.325008 1301 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 16:13:04.329972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 16:13:04.330205 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 16:13:07.749005 systemd[1]: Started sshd@3-10.230.57.126:22-139.178.68.195:41312.service.
Dec 13 16:13:08.633691 sshd[1308]: Accepted publickey for core from 139.178.68.195 port 41312 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 16:13:08.636607 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:13:08.644300 systemd-logind[1183]: New session 4 of user core.
Dec 13 16:13:08.644562 systemd[1]: Started session-4.scope.
Dec 13 16:13:09.253033 sshd[1308]: pam_unix(sshd:session): session closed for user core
Dec 13 16:13:09.257242 systemd-logind[1183]: Session 4 logged out. Waiting for processes to exit.
Dec 13 16:13:09.257794 systemd[1]: sshd@3-10.230.57.126:22-139.178.68.195:41312.service: Deactivated successfully.
Dec 13 16:13:09.258782 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 16:13:09.260048 systemd-logind[1183]: Removed session 4.
Dec 13 16:13:09.401265 systemd[1]: Started sshd@4-10.230.57.126:22-139.178.68.195:41322.service.
Dec 13 16:13:10.291965 sshd[1314]: Accepted publickey for core from 139.178.68.195 port 41322 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 16:13:10.295001 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:13:10.301730 systemd-logind[1183]: New session 5 of user core.
Dec 13 16:13:10.302661 systemd[1]: Started session-5.scope.
Dec 13 16:13:10.908793 sshd[1314]: pam_unix(sshd:session): session closed for user core
Dec 13 16:13:10.912987 systemd[1]: sshd@4-10.230.57.126:22-139.178.68.195:41322.service: Deactivated successfully.
Dec 13 16:13:10.914022 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 16:13:10.914988 systemd-logind[1183]: Session 5 logged out. Waiting for processes to exit.
Dec 13 16:13:10.916366 systemd-logind[1183]: Removed session 5.
Dec 13 16:13:11.055603 systemd[1]: Started sshd@5-10.230.57.126:22-139.178.68.195:41336.service.
Dec 13 16:13:11.941721 sshd[1320]: Accepted publickey for core from 139.178.68.195 port 41336 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 16:13:11.943842 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:13:11.951263 systemd-logind[1183]: New session 6 of user core.
Dec 13 16:13:11.952154 systemd[1]: Started session-6.scope.
Dec 13 16:13:12.561919 sshd[1320]: pam_unix(sshd:session): session closed for user core
Dec 13 16:13:12.566199 systemd[1]: sshd@5-10.230.57.126:22-139.178.68.195:41336.service: Deactivated successfully.
Dec 13 16:13:12.567216 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 16:13:12.568137 systemd-logind[1183]: Session 6 logged out. Waiting for processes to exit.
Dec 13 16:13:12.569861 systemd-logind[1183]: Removed session 6.
Dec 13 16:13:12.707573 systemd[1]: Started sshd@6-10.230.57.126:22-139.178.68.195:41348.service.
Dec 13 16:13:13.588929 sshd[1326]: Accepted publickey for core from 139.178.68.195 port 41348 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 16:13:13.591538 sshd[1326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 16:13:13.598501 systemd-logind[1183]: New session 7 of user core.
Dec 13 16:13:13.599884 systemd[1]: Started session-7.scope.
Dec 13 16:13:14.076672 sudo[1329]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 16:13:14.077125 sudo[1329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 16:13:14.097978 systemd[1]: Starting coreos-metadata.service...
Dec 13 16:13:14.566744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 16:13:14.567211 systemd[1]: Stopped kubelet.service.
Dec 13 16:13:14.569896 systemd[1]: Starting kubelet.service...
Dec 13 16:13:14.696140 systemd[1]: Started kubelet.service.
Dec 13 16:13:14.797701 kubelet[1340]: E1213 16:13:14.797637 1340 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 16:13:14.800364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 16:13:14.800658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 16:13:21.154853 coreos-metadata[1333]: Dec 13 16:13:21.154 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 16:13:21.207101 coreos-metadata[1333]: Dec 13 16:13:21.206 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 16:13:21.209127 coreos-metadata[1333]: Dec 13 16:13:21.208 INFO Fetch successful
Dec 13 16:13:21.209446 coreos-metadata[1333]: Dec 13 16:13:21.209 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 16:13:21.223065 coreos-metadata[1333]: Dec 13 16:13:21.222 INFO Fetch successful
Dec 13 16:13:21.223373 coreos-metadata[1333]: Dec 13 16:13:21.223 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 16:13:21.237984 coreos-metadata[1333]: Dec 13 16:13:21.237 INFO Fetch successful
Dec 13 16:13:21.238264 coreos-metadata[1333]: Dec 13 16:13:21.238 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 16:13:21.251465 coreos-metadata[1333]: Dec 13 16:13:21.251 INFO Fetch successful
Dec 13 16:13:21.251748 coreos-metadata[1333]: Dec 13 16:13:21.251 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 16:13:21.268245 coreos-metadata[1333]: Dec 13 16:13:21.268 INFO Fetch successful
Dec 13 16:13:21.279999 systemd[1]: Finished coreos-metadata.service.
Dec 13 16:13:22.183081 systemd[1]: Stopped kubelet.service.
Dec 13 16:13:22.187217 systemd[1]: Starting kubelet.service...
Dec 13 16:13:22.221575 systemd[1]: Reloading.
Dec 13 16:13:22.379984 /usr/lib/systemd/system-generators/torcx-generator[1405]: time="2024-12-13T16:13:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 16:13:22.380044 /usr/lib/systemd/system-generators/torcx-generator[1405]: time="2024-12-13T16:13:22Z" level=info msg="torcx already run"
Dec 13 16:13:22.491768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 16:13:22.491815 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 16:13:22.520933 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 16:13:22.660063 systemd[1]: Started kubelet.service.
Dec 13 16:13:22.667200 systemd[1]: Stopping kubelet.service...
Dec 13 16:13:22.668223 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 16:13:22.668713 systemd[1]: Stopped kubelet.service.
Dec 13 16:13:22.672563 systemd[1]: Starting kubelet.service...
Dec 13 16:13:22.796349 systemd[1]: Started kubelet.service.
Dec 13 16:13:22.853632 kubelet[1464]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 16:13:22.854304 kubelet[1464]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 16:13:22.854414 kubelet[1464]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 16:13:22.869611 kubelet[1464]: I1213 16:13:22.869541 1464 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 16:13:23.451822 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 16:13:23.497955 kubelet[1464]: I1213 16:13:23.497894 1464 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 16:13:23.497955 kubelet[1464]: I1213 16:13:23.497934 1464 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 16:13:23.498203 kubelet[1464]: I1213 16:13:23.498189 1464 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 16:13:23.513069 kubelet[1464]: I1213 16:13:23.513025 1464 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 16:13:23.529025 kubelet[1464]: I1213 16:13:23.528994 1464 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 16:13:23.530450 kubelet[1464]: I1213 16:13:23.530375 1464 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 16:13:23.530855 kubelet[1464]: I1213 16:13:23.530563 1464 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.57.126","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 16:13:23.531239 kubelet[1464]: I1213 16:13:23.531202 1464 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 16:13:23.531378 kubelet[1464]: I1213 16:13:23.531356 1464 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 16:13:23.531761 kubelet[1464]: I1213 16:13:23.531738 1464 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 16:13:23.533520 kubelet[1464]: I1213 16:13:23.533496 1464 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 16:13:23.533655 kubelet[1464]: I1213 16:13:23.533631 1464 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 16:13:23.533840 kubelet[1464]: I1213 16:13:23.533789 1464 kubelet.go:312] "Adding apiserver pod source"
Dec 13 16:13:23.533982 kubelet[1464]: I1213 16:13:23.533958 1464 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 16:13:23.534628 kubelet[1464]: E1213 16:13:23.534541 1464 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:23.534725 kubelet[1464]: E1213 16:13:23.534630 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:23.538876 kubelet[1464]: I1213 16:13:23.538843 1464 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 16:13:23.540663 kubelet[1464]: I1213 16:13:23.540634 1464 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 16:13:23.540803 kubelet[1464]: W1213 16:13:23.540778 1464 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 16:13:23.541699 kubelet[1464]: W1213 16:13:23.541670 1464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.57.126" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 16:13:23.541861 kubelet[1464]: I1213 16:13:23.541835 1464 server.go:1264] "Started kubelet" Dec 13 16:13:23.541985 kubelet[1464]: E1213 16:13:23.541847 1464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.57.126" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 16:13:23.542361 kubelet[1464]: W1213 16:13:23.542335 1464 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 16:13:23.542599 kubelet[1464]: E1213 16:13:23.542573 1464 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 16:13:23.542981 kubelet[1464]: I1213 16:13:23.542929 1464 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 16:13:23.549872 kubelet[1464]: I1213 16:13:23.549637 1464 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 16:13:23.550310 kubelet[1464]: I1213 16:13:23.550276 1464 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 16:13:23.558683 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 16:13:23.559494 kubelet[1464]: I1213 16:13:23.558816 1464 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 16:13:23.561789 kubelet[1464]: E1213 16:13:23.561564 1464 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.57.126.1810c89619d6d06a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.57.126,UID:10.230.57.126,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.57.126,},FirstTimestamp:2024-12-13 16:13:23.541799018 +0000 UTC m=+0.738353457,LastTimestamp:2024-12-13 16:13:23.541799018 +0000 UTC m=+0.738353457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.57.126,}" Dec 13 16:13:23.566400 kubelet[1464]: I1213 16:13:23.566247 1464 server.go:455] "Adding debug handlers to kubelet server" Dec 13 16:13:23.573745 kubelet[1464]: I1213 16:13:23.573703 1464 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 16:13:23.578611 kubelet[1464]: I1213 16:13:23.575134 1464 reconciler.go:26] "Reconciler: start to sync state" Dec 13 16:13:23.578611 kubelet[1464]: I1213 16:13:23.576504 1464 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 16:13:23.579884 kubelet[1464]: E1213 16:13:23.579843 1464 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 16:13:23.580475 kubelet[1464]: I1213 16:13:23.580449 1464 factory.go:221] Registration of the systemd container factory successfully Dec 13 16:13:23.580921 kubelet[1464]: I1213 16:13:23.580890 1464 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 16:13:23.581572 kubelet[1464]: E1213 16:13:23.581495 1464 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.57.126\" not found" node="10.230.57.126" Dec 13 16:13:23.587789 kubelet[1464]: I1213 16:13:23.587746 1464 factory.go:221] Registration of the containerd container factory successfully Dec 13 16:13:23.626206 kubelet[1464]: I1213 16:13:23.626172 1464 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 16:13:23.626206 kubelet[1464]: I1213 16:13:23.626198 1464 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 16:13:23.626465 kubelet[1464]: I1213 16:13:23.626238 1464 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:13:23.630308 kubelet[1464]: I1213 16:13:23.630279 1464 policy_none.go:49] "None policy: Start" Dec 13 16:13:23.631022 kubelet[1464]: I1213 16:13:23.630996 1464 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 16:13:23.631113 kubelet[1464]: I1213 16:13:23.631045 1464 state_mem.go:35] "Initializing new in-memory state store" Dec 13 16:13:23.641962 systemd[1]: Created slice kubepods.slice. Dec 13 16:13:23.650695 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 16:13:23.655514 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 16:13:23.678348 kubelet[1464]: I1213 16:13:23.678313 1464 kubelet_node_status.go:73] "Attempting to register node" node="10.230.57.126" Dec 13 16:13:23.682649 kubelet[1464]: I1213 16:13:23.682624 1464 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 16:13:23.684138 kubelet[1464]: I1213 16:13:23.684094 1464 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 16:13:23.684451 kubelet[1464]: I1213 16:13:23.684408 1464 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 16:13:23.687184 kubelet[1464]: I1213 16:13:23.687091 1464 kubelet_node_status.go:76] "Successfully registered node" node="10.230.57.126" Dec 13 16:13:23.688477 kubelet[1464]: E1213 16:13:23.688414 1464 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.57.126\" not found" Dec 13 16:13:23.703370 kubelet[1464]: E1213 16:13:23.703195 1464 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.57.126\" not found" Dec 13 16:13:23.753149 kubelet[1464]: I1213 16:13:23.753073 1464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 16:13:23.754738 kubelet[1464]: I1213 16:13:23.754711 1464 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 16:13:23.754947 kubelet[1464]: I1213 16:13:23.754914 1464 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 16:13:23.755105 kubelet[1464]: I1213 16:13:23.755082 1464 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 16:13:23.755370 kubelet[1464]: E1213 16:13:23.755343 1464 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 16:13:23.803822 kubelet[1464]: E1213 16:13:23.803743 1464 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.57.126\" not found" Dec 13 16:13:23.827527 sudo[1329]: pam_unix(sudo:session): session closed for user root Dec 13 16:13:23.904753 kubelet[1464]: E1213 16:13:23.904678 1464 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.57.126\" not found" Dec 13 16:13:23.971918 sshd[1326]: pam_unix(sshd:session): session closed for user core Dec 13 16:13:23.977351 systemd-logind[1183]: Session 7 logged out. Waiting for processes to exit. Dec 13 16:13:23.978400 systemd[1]: sshd@6-10.230.57.126:22-139.178.68.195:41348.service: Deactivated successfully. Dec 13 16:13:23.979541 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 16:13:23.980453 systemd-logind[1183]: Removed session 7. 
Dec 13 16:13:24.006530 kubelet[1464]: E1213 16:13:24.006493 1464 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.57.126\" not found" Dec 13 16:13:24.107808 kubelet[1464]: E1213 16:13:24.107752 1464 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.57.126\" not found" Dec 13 16:13:24.208945 kubelet[1464]: E1213 16:13:24.208883 1464 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.57.126\" not found" Dec 13 16:13:24.310272 kubelet[1464]: E1213 16:13:24.310058 1464 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.57.126\" not found" Dec 13 16:13:24.412017 kubelet[1464]: I1213 16:13:24.411976 1464 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 16:13:24.412817 env[1188]: time="2024-12-13T16:13:24.412713580Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 16:13:24.413686 kubelet[1464]: I1213 16:13:24.413515 1464 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 16:13:24.501326 kubelet[1464]: I1213 16:13:24.501276 1464 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 16:13:24.501920 kubelet[1464]: W1213 16:13:24.501890 1464 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 16:13:24.502105 kubelet[1464]: W1213 16:13:24.502078 1464 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 16:13:24.502327 kubelet[1464]: W1213 16:13:24.502287 1464 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 16:13:24.535109 kubelet[1464]: E1213 16:13:24.535061 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:24.535296 kubelet[1464]: I1213 16:13:24.535197 1464 apiserver.go:52] "Watching apiserver" Dec 13 16:13:24.546749 kubelet[1464]: I1213 16:13:24.546705 1464 topology_manager.go:215] "Topology Admit Handler" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" podNamespace="kube-system" podName="cilium-cnhpl" Dec 13 16:13:24.547126 kubelet[1464]: I1213 16:13:24.547094 1464 topology_manager.go:215] "Topology Admit Handler" podUID="dfea5e67-aaed-4e6f-90bb-cd7b56240aef" podNamespace="kube-system" podName="kube-proxy-r47zv" Dec 13 16:13:24.555294 systemd[1]: Created slice 
kubepods-burstable-podb25e8eb1_78f9_4f6b_9823_67ec8806057c.slice. Dec 13 16:13:24.567736 systemd[1]: Created slice kubepods-besteffort-poddfea5e67_aaed_4e6f_90bb_cd7b56240aef.slice. Dec 13 16:13:24.577513 kubelet[1464]: I1213 16:13:24.577478 1464 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 16:13:24.581721 kubelet[1464]: I1213 16:13:24.581678 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-bpf-maps\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.581824 kubelet[1464]: I1213 16:13:24.581726 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-net\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.581824 kubelet[1464]: I1213 16:13:24.581758 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfea5e67-aaed-4e6f-90bb-cd7b56240aef-xtables-lock\") pod \"kube-proxy-r47zv\" (UID: \"dfea5e67-aaed-4e6f-90bb-cd7b56240aef\") " pod="kube-system/kube-proxy-r47zv" Dec 13 16:13:24.581824 kubelet[1464]: I1213 16:13:24.581792 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-run\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.581824 kubelet[1464]: I1213 16:13:24.581820 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-cgroup\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582053 kubelet[1464]: I1213 16:13:24.581855 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b25e8eb1-78f9-4f6b-9823-67ec8806057c-clustermesh-secrets\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582053 kubelet[1464]: I1213 16:13:24.581881 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hubble-tls\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582053 kubelet[1464]: I1213 16:13:24.581907 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfea5e67-aaed-4e6f-90bb-cd7b56240aef-kube-proxy\") pod \"kube-proxy-r47zv\" (UID: \"dfea5e67-aaed-4e6f-90bb-cd7b56240aef\") " pod="kube-system/kube-proxy-r47zv" Dec 13 16:13:24.582053 kubelet[1464]: I1213 16:13:24.581933 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cni-path\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582053 kubelet[1464]: I1213 16:13:24.581959 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-kernel\") pod 
\"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582308 kubelet[1464]: I1213 16:13:24.582052 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfea5e67-aaed-4e6f-90bb-cd7b56240aef-lib-modules\") pod \"kube-proxy-r47zv\" (UID: \"dfea5e67-aaed-4e6f-90bb-cd7b56240aef\") " pod="kube-system/kube-proxy-r47zv" Dec 13 16:13:24.582308 kubelet[1464]: I1213 16:13:24.582085 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qwb9\" (UniqueName: \"kubernetes.io/projected/dfea5e67-aaed-4e6f-90bb-cd7b56240aef-kube-api-access-7qwb9\") pod \"kube-proxy-r47zv\" (UID: \"dfea5e67-aaed-4e6f-90bb-cd7b56240aef\") " pod="kube-system/kube-proxy-r47zv" Dec 13 16:13:24.582308 kubelet[1464]: I1213 16:13:24.582115 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwbx\" (UniqueName: \"kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-kube-api-access-bvwbx\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582308 kubelet[1464]: I1213 16:13:24.582140 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hostproc\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582308 kubelet[1464]: I1213 16:13:24.582165 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-etc-cni-netd\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 
13 16:13:24.582643 kubelet[1464]: I1213 16:13:24.582227 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-lib-modules\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582643 kubelet[1464]: I1213 16:13:24.582263 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-xtables-lock\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.582643 kubelet[1464]: I1213 16:13:24.582310 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-config-path\") pod \"cilium-cnhpl\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") " pod="kube-system/cilium-cnhpl" Dec 13 16:13:24.865356 env[1188]: time="2024-12-13T16:13:24.865202781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnhpl,Uid:b25e8eb1-78f9-4f6b-9823-67ec8806057c,Namespace:kube-system,Attempt:0,}" Dec 13 16:13:24.878519 env[1188]: time="2024-12-13T16:13:24.878107409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r47zv,Uid:dfea5e67-aaed-4e6f-90bb-cd7b56240aef,Namespace:kube-system,Attempt:0,}" Dec 13 16:13:25.535987 kubelet[1464]: E1213 16:13:25.535875 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:25.700132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753991353.mount: Deactivated successfully. 
Dec 13 16:13:25.724353 env[1188]: time="2024-12-13T16:13:25.724287904Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.726887 env[1188]: time="2024-12-13T16:13:25.726852093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.730085 env[1188]: time="2024-12-13T16:13:25.730045015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.731519 env[1188]: time="2024-12-13T16:13:25.731461289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.735161 env[1188]: time="2024-12-13T16:13:25.735128816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.736793 env[1188]: time="2024-12-13T16:13:25.736759223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.740793 env[1188]: time="2024-12-13T16:13:25.740714959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.741854 env[1188]: time="2024-12-13T16:13:25.741816380Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:25.790914 env[1188]: time="2024-12-13T16:13:25.790640393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:13:25.790914 env[1188]: time="2024-12-13T16:13:25.790756679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:13:25.791646 env[1188]: time="2024-12-13T16:13:25.790776183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:13:25.791867 env[1188]: time="2024-12-13T16:13:25.791748307Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfd37fcfe4261222dabef71cad0311a477d51b63699ce6f91a8a2be66cbe6b1c pid=1530 runtime=io.containerd.runc.v2 Dec 13 16:13:25.791965 env[1188]: time="2024-12-13T16:13:25.790827844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:13:25.792031 env[1188]: time="2024-12-13T16:13:25.791954693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:13:25.792091 env[1188]: time="2024-12-13T16:13:25.792043167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:13:25.804754 env[1188]: time="2024-12-13T16:13:25.799125406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145 pid=1529 runtime=io.containerd.runc.v2 Dec 13 16:13:25.828719 systemd[1]: Started cri-containerd-d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145.scope. Dec 13 16:13:25.842651 systemd[1]: Started cri-containerd-bfd37fcfe4261222dabef71cad0311a477d51b63699ce6f91a8a2be66cbe6b1c.scope. Dec 13 16:13:25.909244 env[1188]: time="2024-12-13T16:13:25.909170683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnhpl,Uid:b25e8eb1-78f9-4f6b-9823-67ec8806057c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\"" Dec 13 16:13:25.913766 env[1188]: time="2024-12-13T16:13:25.913727075Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 16:13:25.916673 env[1188]: time="2024-12-13T16:13:25.916606594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r47zv,Uid:dfea5e67-aaed-4e6f-90bb-cd7b56240aef,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfd37fcfe4261222dabef71cad0311a477d51b63699ce6f91a8a2be66cbe6b1c\"" Dec 13 16:13:26.537080 kubelet[1464]: E1213 16:13:26.536997 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:27.538083 kubelet[1464]: E1213 16:13:27.538004 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:28.539174 kubelet[1464]: E1213 16:13:28.539111 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:29.539550 
kubelet[1464]: E1213 16:13:29.539480 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:30.540696 kubelet[1464]: E1213 16:13:30.540614 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:31.541552 kubelet[1464]: E1213 16:13:31.541468 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:32.542637 kubelet[1464]: E1213 16:13:32.542520 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:33.543566 kubelet[1464]: E1213 16:13:33.543490 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:34.543850 kubelet[1464]: E1213 16:13:34.543739 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:35.545129 kubelet[1464]: E1213 16:13:35.544979 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:35.819913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621657532.mount: Deactivated successfully. Dec 13 16:13:36.546234 kubelet[1464]: E1213 16:13:36.546152 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:37.447555 update_engine[1184]: I1213 16:13:37.446251 1184 update_attempter.cc:509] Updating boot flags... 
Dec 13 16:13:37.546717 kubelet[1464]: E1213 16:13:37.546639 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:38.547700 kubelet[1464]: E1213 16:13:38.547637 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:39.548265 kubelet[1464]: E1213 16:13:39.548219 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:40.362149 env[1188]: time="2024-12-13T16:13:40.362070350Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:40.364345 env[1188]: time="2024-12-13T16:13:40.364302878Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:40.366435 env[1188]: time="2024-12-13T16:13:40.366382885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:13:40.367462 env[1188]: time="2024-12-13T16:13:40.367394783Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 16:13:40.372108 env[1188]: time="2024-12-13T16:13:40.371961613Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 16:13:40.375078 env[1188]: time="2024-12-13T16:13:40.375035882Z" level=info msg="CreateContainer within sandbox 
\"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 16:13:40.389532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159041329.mount: Deactivated successfully. Dec 13 16:13:40.401647 env[1188]: time="2024-12-13T16:13:40.401605173Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\"" Dec 13 16:13:40.403054 env[1188]: time="2024-12-13T16:13:40.403000401Z" level=info msg="StartContainer for \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\"" Dec 13 16:13:40.433706 systemd[1]: Started cri-containerd-e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d.scope. Dec 13 16:13:40.489107 env[1188]: time="2024-12-13T16:13:40.489053265Z" level=info msg="StartContainer for \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\" returns successfully" Dec 13 16:13:40.508673 systemd[1]: cri-containerd-e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d.scope: Deactivated successfully. 
Dec 13 16:13:40.549197 kubelet[1464]: E1213 16:13:40.549120 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:40.811047 env[1188]: time="2024-12-13T16:13:40.810367213Z" level=info msg="shim disconnected" id=e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d Dec 13 16:13:40.811329 env[1188]: time="2024-12-13T16:13:40.811295350Z" level=warning msg="cleaning up after shim disconnected" id=e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d namespace=k8s.io Dec 13 16:13:40.811523 env[1188]: time="2024-12-13T16:13:40.811492601Z" level=info msg="cleaning up dead shim" Dec 13 16:13:40.827626 env[1188]: time="2024-12-13T16:13:40.827548575Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:13:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1659 runtime=io.containerd.runc.v2\n" Dec 13 16:13:41.387185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d-rootfs.mount: Deactivated successfully. Dec 13 16:13:41.550109 kubelet[1464]: E1213 16:13:41.550002 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:13:41.821490 env[1188]: time="2024-12-13T16:13:41.821391380Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 16:13:41.835789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098198743.mount: Deactivated successfully. 
Dec 13 16:13:41.847200 env[1188]: time="2024-12-13T16:13:41.847143266Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\"" Dec 13 16:13:41.848152 env[1188]: time="2024-12-13T16:13:41.848115844Z" level=info msg="StartContainer for \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\"" Dec 13 16:13:41.887722 systemd[1]: Started cri-containerd-321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71.scope. Dec 13 16:13:41.946212 env[1188]: time="2024-12-13T16:13:41.946153843Z" level=info msg="StartContainer for \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\" returns successfully" Dec 13 16:13:41.961141 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 16:13:41.962132 systemd[1]: Stopped systemd-sysctl.service. Dec 13 16:13:41.963152 systemd[1]: Stopping systemd-sysctl.service... Dec 13 16:13:41.967857 systemd[1]: Starting systemd-sysctl.service... Dec 13 16:13:41.973304 systemd[1]: cri-containerd-321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71.scope: Deactivated successfully. Dec 13 16:13:41.983790 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 16:13:42.077252 env[1188]: time="2024-12-13T16:13:42.076487407Z" level=info msg="shim disconnected" id=321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71
Dec 13 16:13:42.078350 env[1188]: time="2024-12-13T16:13:42.078307509Z" level=warning msg="cleaning up after shim disconnected" id=321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71 namespace=k8s.io
Dec 13 16:13:42.078477 env[1188]: time="2024-12-13T16:13:42.078343126Z" level=info msg="cleaning up dead shim"
Dec 13 16:13:42.103952 env[1188]: time="2024-12-13T16:13:42.103874192Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:13:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1722 runtime=io.containerd.runc.v2\n"
Dec 13 16:13:42.386611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711967542.mount: Deactivated successfully.
Dec 13 16:13:42.386773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489958669.mount: Deactivated successfully.
Dec 13 16:13:42.550513 kubelet[1464]: E1213 16:13:42.550379 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:42.826344 env[1188]: time="2024-12-13T16:13:42.826289260Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 16:13:42.851541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928750398.mount: Deactivated successfully.
Dec 13 16:13:42.859124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075208191.mount: Deactivated successfully.
Dec 13 16:13:42.868723 env[1188]: time="2024-12-13T16:13:42.868663590Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\""
Dec 13 16:13:42.869861 env[1188]: time="2024-12-13T16:13:42.869812880Z" level=info msg="StartContainer for \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\""
Dec 13 16:13:42.906783 systemd[1]: Started cri-containerd-cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6.scope.
Dec 13 16:13:42.967585 env[1188]: time="2024-12-13T16:13:42.967526234Z" level=info msg="StartContainer for \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\" returns successfully"
Dec 13 16:13:42.969937 systemd[1]: cri-containerd-cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6.scope: Deactivated successfully.
Dec 13 16:13:43.090681 env[1188]: time="2024-12-13T16:13:43.089975264Z" level=info msg="shim disconnected" id=cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6
Dec 13 16:13:43.090681 env[1188]: time="2024-12-13T16:13:43.090036859Z" level=warning msg="cleaning up after shim disconnected" id=cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6 namespace=k8s.io
Dec 13 16:13:43.090681 env[1188]: time="2024-12-13T16:13:43.090054317Z" level=info msg="cleaning up dead shim"
Dec 13 16:13:43.111979 env[1188]: time="2024-12-13T16:13:43.111917428Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:13:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1778 runtime=io.containerd.runc.v2\n"
Dec 13 16:13:43.471158 env[1188]: time="2024-12-13T16:13:43.470631572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:13:43.472569 env[1188]: time="2024-12-13T16:13:43.472534686Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:13:43.474275 env[1188]: time="2024-12-13T16:13:43.474241521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:13:43.475806 env[1188]: time="2024-12-13T16:13:43.475771373Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:13:43.476724 env[1188]: time="2024-12-13T16:13:43.476687700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 16:13:43.480067 env[1188]: time="2024-12-13T16:13:43.480030111Z" level=info msg="CreateContainer within sandbox \"bfd37fcfe4261222dabef71cad0311a477d51b63699ce6f91a8a2be66cbe6b1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 16:13:43.494857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606283342.mount: Deactivated successfully.
Dec 13 16:13:43.502312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141953084.mount: Deactivated successfully.
Dec 13 16:13:43.513059 env[1188]: time="2024-12-13T16:13:43.513017242Z" level=info msg="CreateContainer within sandbox \"bfd37fcfe4261222dabef71cad0311a477d51b63699ce6f91a8a2be66cbe6b1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8fe9acbbf195d46a5bbced776d7b217f28bc256914e4632377740571539881b7\""
Dec 13 16:13:43.513845 env[1188]: time="2024-12-13T16:13:43.513799825Z" level=info msg="StartContainer for \"8fe9acbbf195d46a5bbced776d7b217f28bc256914e4632377740571539881b7\""
Dec 13 16:13:43.534174 kubelet[1464]: E1213 16:13:43.534117 1464 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:43.549696 systemd[1]: Started cri-containerd-8fe9acbbf195d46a5bbced776d7b217f28bc256914e4632377740571539881b7.scope.
Dec 13 16:13:43.554755 kubelet[1464]: E1213 16:13:43.554559 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:43.602866 env[1188]: time="2024-12-13T16:13:43.602809482Z" level=info msg="StartContainer for \"8fe9acbbf195d46a5bbced776d7b217f28bc256914e4632377740571539881b7\" returns successfully"
Dec 13 16:13:43.832798 env[1188]: time="2024-12-13T16:13:43.832722852Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 16:13:43.841186 kubelet[1464]: I1213 16:13:43.841090 1464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r47zv" podStartSLOduration=3.28157744 podStartE2EDuration="20.841049097s" podCreationTimestamp="2024-12-13 16:13:23 +0000 UTC" firstStartedPulling="2024-12-13 16:13:25.918568782 +0000 UTC m=+3.115123202" lastFinishedPulling="2024-12-13 16:13:43.47804043 +0000 UTC m=+20.674594859" observedRunningTime="2024-12-13 16:13:43.840800503 +0000 UTC m=+21.037354941" watchObservedRunningTime="2024-12-13 16:13:43.841049097 +0000 UTC m=+21.037603535"
Dec 13 16:13:43.848691 env[1188]: time="2024-12-13T16:13:43.848650163Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\""
Dec 13 16:13:43.849580 env[1188]: time="2024-12-13T16:13:43.849547127Z" level=info msg="StartContainer for \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\""
Dec 13 16:13:43.885530 systemd[1]: Started cri-containerd-d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a.scope.
Dec 13 16:13:43.947805 systemd[1]: cri-containerd-d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a.scope: Deactivated successfully.
Dec 13 16:13:43.952156 env[1188]: time="2024-12-13T16:13:43.951929538Z" level=info msg="StartContainer for \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\" returns successfully"
Dec 13 16:13:43.956873 env[1188]: time="2024-12-13T16:13:43.956605043Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb25e8eb1_78f9_4f6b_9823_67ec8806057c.slice/cri-containerd-d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a.scope/memory.events\": no such file or directory"
Dec 13 16:13:44.029222 env[1188]: time="2024-12-13T16:13:44.029151382Z" level=info msg="shim disconnected" id=d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a
Dec 13 16:13:44.029567 env[1188]: time="2024-12-13T16:13:44.029533271Z" level=warning msg="cleaning up after shim disconnected" id=d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a namespace=k8s.io
Dec 13 16:13:44.029696 env[1188]: time="2024-12-13T16:13:44.029668104Z" level=info msg="cleaning up dead shim"
Dec 13 16:13:44.045813 env[1188]: time="2024-12-13T16:13:44.045756276Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:13:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1949 runtime=io.containerd.runc.v2\n"
Dec 13 16:13:44.555669 kubelet[1464]: E1213 16:13:44.555604 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:44.838000 env[1188]: time="2024-12-13T16:13:44.837941554Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 16:13:44.873157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546892560.mount: Deactivated successfully.
Dec 13 16:13:44.880324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546867863.mount: Deactivated successfully.
Dec 13 16:13:44.885547 env[1188]: time="2024-12-13T16:13:44.885500609Z" level=info msg="CreateContainer within sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\""
Dec 13 16:13:44.906757 env[1188]: time="2024-12-13T16:13:44.906690975Z" level=info msg="StartContainer for \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\""
Dec 13 16:13:44.929772 systemd[1]: Started cri-containerd-2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5.scope.
Dec 13 16:13:44.990037 env[1188]: time="2024-12-13T16:13:44.989983079Z" level=info msg="StartContainer for \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\" returns successfully"
Dec 13 16:13:45.194151 kubelet[1464]: I1213 16:13:45.192314 1464 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 16:13:45.556654 kubelet[1464]: E1213 16:13:45.556536 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:45.644467 kernel: Initializing XFRM netlink socket
Dec 13 16:13:45.866772 kubelet[1464]: I1213 16:13:45.866660 1464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cnhpl" podStartSLOduration=8.409227343 podStartE2EDuration="22.866637516s" podCreationTimestamp="2024-12-13 16:13:23 +0000 UTC" firstStartedPulling="2024-12-13 16:13:25.912286696 +0000 UTC m=+3.108841121" lastFinishedPulling="2024-12-13 16:13:40.369696875 +0000 UTC m=+17.566251294" observedRunningTime="2024-12-13 16:13:45.866607476 +0000 UTC m=+23.063161929" watchObservedRunningTime="2024-12-13 16:13:45.866637516 +0000 UTC m=+23.063191947"
Dec 13 16:13:46.556925 kubelet[1464]: E1213 16:13:46.556853 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:47.367598 systemd-networkd[1016]: cilium_host: Link UP
Dec 13 16:13:47.379048 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 16:13:47.382365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 16:13:47.382493 systemd-networkd[1016]: cilium_net: Link UP
Dec 13 16:13:47.388119 systemd-networkd[1016]: cilium_net: Gained carrier
Dec 13 16:13:47.388690 systemd-networkd[1016]: cilium_host: Gained carrier
Dec 13 16:13:47.557952 kubelet[1464]: E1213 16:13:47.557901 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:47.570312 systemd-networkd[1016]: cilium_vxlan: Link UP
Dec 13 16:13:47.570325 systemd-networkd[1016]: cilium_vxlan: Gained carrier
Dec 13 16:13:47.798565 kubelet[1464]: I1213 16:13:47.797454 1464 topology_manager.go:215] "Topology Admit Handler" podUID="c17f7584-24fe-4f8d-b877-6a92ec044bcb" podNamespace="default" podName="nginx-deployment-85f456d6dd-p5598"
Dec 13 16:13:47.807448 systemd[1]: Created slice kubepods-besteffort-podc17f7584_24fe_4f8d_b877_6a92ec044bcb.slice.
Dec 13 16:13:47.834745 systemd-networkd[1016]: cilium_net: Gained IPv6LL
Dec 13 16:13:47.874736 systemd-networkd[1016]: cilium_host: Gained IPv6LL
Dec 13 16:13:47.946530 kubelet[1464]: I1213 16:13:47.946354 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gj9\" (UniqueName: \"kubernetes.io/projected/c17f7584-24fe-4f8d-b877-6a92ec044bcb-kube-api-access-42gj9\") pod \"nginx-deployment-85f456d6dd-p5598\" (UID: \"c17f7584-24fe-4f8d-b877-6a92ec044bcb\") " pod="default/nginx-deployment-85f456d6dd-p5598"
Dec 13 16:13:47.985587 kernel: NET: Registered PF_ALG protocol family
Dec 13 16:13:48.112673 env[1188]: time="2024-12-13T16:13:48.112553251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-p5598,Uid:c17f7584-24fe-4f8d-b877-6a92ec044bcb,Namespace:default,Attempt:0,}"
Dec 13 16:13:48.558695 kubelet[1464]: E1213 16:13:48.558521 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:49.012115 systemd-networkd[1016]: lxc_health: Link UP
Dec 13 16:13:49.038319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 16:13:49.043030 systemd-networkd[1016]: lxc_health: Gained carrier
Dec 13 16:13:49.434660 systemd-networkd[1016]: cilium_vxlan: Gained IPv6LL
Dec 13 16:13:49.560039 kubelet[1464]: E1213 16:13:49.559957 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:49.712590 systemd-networkd[1016]: lxcb454fa870c6f: Link UP
Dec 13 16:13:49.720460 kernel: eth0: renamed from tmpec91e
Dec 13 16:13:49.727460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb454fa870c6f: link becomes ready
Dec 13 16:13:49.727566 systemd-networkd[1016]: lxcb454fa870c6f: Gained carrier
Dec 13 16:13:50.560582 kubelet[1464]: E1213 16:13:50.560488 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:50.714854 systemd-networkd[1016]: lxc_health: Gained IPv6LL
Dec 13 16:13:51.561187 kubelet[1464]: E1213 16:13:51.561096 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:51.610745 systemd-networkd[1016]: lxcb454fa870c6f: Gained IPv6LL
Dec 13 16:13:52.561505 kubelet[1464]: E1213 16:13:52.561381 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:53.561831 kubelet[1464]: E1213 16:13:53.561765 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:54.030286 kubelet[1464]: I1213 16:13:54.030200 1464 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 16:13:54.562478 kubelet[1464]: E1213 16:13:54.562401 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:55.295432 env[1188]: time="2024-12-13T16:13:55.295264990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 16:13:55.296232 env[1188]: time="2024-12-13T16:13:55.295408320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 16:13:55.296232 env[1188]: time="2024-12-13T16:13:55.295460272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 16:13:55.296574 env[1188]: time="2024-12-13T16:13:55.296505361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec91e623385b0e2f5ae43146ca25d2a480c8f1f04ec0535f68acd43a1cca3480 pid=2518 runtime=io.containerd.runc.v2
Dec 13 16:13:55.330546 systemd[1]: run-containerd-runc-k8s.io-ec91e623385b0e2f5ae43146ca25d2a480c8f1f04ec0535f68acd43a1cca3480-runc.dg82QZ.mount: Deactivated successfully.
Dec 13 16:13:55.338391 systemd[1]: Started cri-containerd-ec91e623385b0e2f5ae43146ca25d2a480c8f1f04ec0535f68acd43a1cca3480.scope.
Dec 13 16:13:55.411147 env[1188]: time="2024-12-13T16:13:55.411062963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-p5598,Uid:c17f7584-24fe-4f8d-b877-6a92ec044bcb,Namespace:default,Attempt:0,} returns sandbox id \"ec91e623385b0e2f5ae43146ca25d2a480c8f1f04ec0535f68acd43a1cca3480\""
Dec 13 16:13:55.414601 env[1188]: time="2024-12-13T16:13:55.414562630Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 16:13:55.563871 kubelet[1464]: E1213 16:13:55.562823 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:56.563720 kubelet[1464]: E1213 16:13:56.563639 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:57.564637 kubelet[1464]: E1213 16:13:57.564527 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:58.564918 kubelet[1464]: E1213 16:13:58.564839 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:13:59.397490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4246542232.mount: Deactivated successfully.
Dec 13 16:13:59.565639 kubelet[1464]: E1213 16:13:59.565544 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:00.566548 kubelet[1464]: E1213 16:14:00.566429 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:01.567745 kubelet[1464]: E1213 16:14:01.567641 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:01.846788 env[1188]: time="2024-12-13T16:14:01.846387682Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:01.848858 env[1188]: time="2024-12-13T16:14:01.848815667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:01.851472 env[1188]: time="2024-12-13T16:14:01.851409795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:01.853980 env[1188]: time="2024-12-13T16:14:01.853941929Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:01.855178 env[1188]: time="2024-12-13T16:14:01.855124516Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 16:14:01.859210 env[1188]: time="2024-12-13T16:14:01.859170545Z" level=info msg="CreateContainer within sandbox \"ec91e623385b0e2f5ae43146ca25d2a480c8f1f04ec0535f68acd43a1cca3480\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 16:14:01.872876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816235824.mount: Deactivated successfully.
Dec 13 16:14:01.893770 env[1188]: time="2024-12-13T16:14:01.893688855Z" level=info msg="CreateContainer within sandbox \"ec91e623385b0e2f5ae43146ca25d2a480c8f1f04ec0535f68acd43a1cca3480\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e4cfd735df586d464e778c8ddf0b18d1fb74232c7c84c48e271f6f37431b8b4c\""
Dec 13 16:14:01.894692 env[1188]: time="2024-12-13T16:14:01.894656035Z" level=info msg="StartContainer for \"e4cfd735df586d464e778c8ddf0b18d1fb74232c7c84c48e271f6f37431b8b4c\""
Dec 13 16:14:01.931776 systemd[1]: Started cri-containerd-e4cfd735df586d464e778c8ddf0b18d1fb74232c7c84c48e271f6f37431b8b4c.scope.
Dec 13 16:14:01.991785 env[1188]: time="2024-12-13T16:14:01.991702332Z" level=info msg="StartContainer for \"e4cfd735df586d464e778c8ddf0b18d1fb74232c7c84c48e271f6f37431b8b4c\" returns successfully"
Dec 13 16:14:02.568238 kubelet[1464]: E1213 16:14:02.568123 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:02.914311 kubelet[1464]: I1213 16:14:02.914083 1464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-p5598" podStartSLOduration=9.470650771 podStartE2EDuration="15.914043882s" podCreationTimestamp="2024-12-13 16:13:47 +0000 UTC" firstStartedPulling="2024-12-13 16:13:55.413501989 +0000 UTC m=+32.610056409" lastFinishedPulling="2024-12-13 16:14:01.856895096 +0000 UTC m=+39.053449520" observedRunningTime="2024-12-13 16:14:02.913824276 +0000 UTC m=+40.110378715" watchObservedRunningTime="2024-12-13 16:14:02.914043882 +0000 UTC m=+40.110598321"
Dec 13 16:14:03.534233 kubelet[1464]: E1213 16:14:03.534156 1464 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:03.569158 kubelet[1464]: E1213 16:14:03.569053 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:04.569594 kubelet[1464]: E1213 16:14:04.569521 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:05.571045 kubelet[1464]: E1213 16:14:05.570944 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:06.571975 kubelet[1464]: E1213 16:14:06.571867 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:07.572802 kubelet[1464]: E1213 16:14:07.572710 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:08.573161 kubelet[1464]: E1213 16:14:08.573051 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:08.986275 kubelet[1464]: I1213 16:14:08.986055 1464 topology_manager.go:215] "Topology Admit Handler" podUID="dfb7bde4-146f-43cd-8b67-18c0ff621447" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 16:14:08.995483 systemd[1]: Created slice kubepods-besteffort-poddfb7bde4_146f_43cd_8b67_18c0ff621447.slice.
Dec 13 16:14:09.093083 kubelet[1464]: I1213 16:14:09.093003 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4gqm\" (UniqueName: \"kubernetes.io/projected/dfb7bde4-146f-43cd-8b67-18c0ff621447-kube-api-access-g4gqm\") pod \"nfs-server-provisioner-0\" (UID: \"dfb7bde4-146f-43cd-8b67-18c0ff621447\") " pod="default/nfs-server-provisioner-0"
Dec 13 16:14:09.093600 kubelet[1464]: I1213 16:14:09.093568 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/dfb7bde4-146f-43cd-8b67-18c0ff621447-data\") pod \"nfs-server-provisioner-0\" (UID: \"dfb7bde4-146f-43cd-8b67-18c0ff621447\") " pod="default/nfs-server-provisioner-0"
Dec 13 16:14:09.304162 env[1188]: time="2024-12-13T16:14:09.302811208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dfb7bde4-146f-43cd-8b67-18c0ff621447,Namespace:default,Attempt:0,}"
Dec 13 16:14:09.362703 systemd-networkd[1016]: lxc00680fe9c439: Link UP
Dec 13 16:14:09.379610 kernel: eth0: renamed from tmpdf381
Dec 13 16:14:09.389746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 16:14:09.394277 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc00680fe9c439: link becomes ready
Dec 13 16:14:09.393789 systemd-networkd[1016]: lxc00680fe9c439: Gained carrier
Dec 13 16:14:09.574315 kubelet[1464]: E1213 16:14:09.574195 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:09.642366 env[1188]: time="2024-12-13T16:14:09.642252306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 16:14:09.642836 env[1188]: time="2024-12-13T16:14:09.642754914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 16:14:09.643036 env[1188]: time="2024-12-13T16:14:09.642981797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 16:14:09.644166 env[1188]: time="2024-12-13T16:14:09.643798351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df381e3f50a2bd1f9fd38182986001c055767ca384266f30825b46280316c1c7 pid=2650 runtime=io.containerd.runc.v2
Dec 13 16:14:09.675560 systemd[1]: Started cri-containerd-df381e3f50a2bd1f9fd38182986001c055767ca384266f30825b46280316c1c7.scope.
Dec 13 16:14:09.760676 env[1188]: time="2024-12-13T16:14:09.760601227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dfb7bde4-146f-43cd-8b67-18c0ff621447,Namespace:default,Attempt:0,} returns sandbox id \"df381e3f50a2bd1f9fd38182986001c055767ca384266f30825b46280316c1c7\""
Dec 13 16:14:09.763465 env[1188]: time="2024-12-13T16:14:09.763410818Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 16:14:10.574652 kubelet[1464]: E1213 16:14:10.574484 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:11.323698 systemd-networkd[1016]: lxc00680fe9c439: Gained IPv6LL
Dec 13 16:14:11.574843 kubelet[1464]: E1213 16:14:11.574661 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:12.575077 kubelet[1464]: E1213 16:14:12.574945 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:13.575588 kubelet[1464]: E1213 16:14:13.575506 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:13.849150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115674196.mount: Deactivated successfully.
Dec 13 16:14:14.576386 kubelet[1464]: E1213 16:14:14.576320 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:15.577099 kubelet[1464]: E1213 16:14:15.577010 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:16.577881 kubelet[1464]: E1213 16:14:16.577812 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:17.509314 env[1188]: time="2024-12-13T16:14:17.509194920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:17.513273 env[1188]: time="2024-12-13T16:14:17.513238714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:17.517054 env[1188]: time="2024-12-13T16:14:17.517019594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:17.520798 env[1188]: time="2024-12-13T16:14:17.520762747Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:17.522717 env[1188]: time="2024-12-13T16:14:17.522653743Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 16:14:17.528799 env[1188]: time="2024-12-13T16:14:17.528760232Z" level=info msg="CreateContainer within sandbox \"df381e3f50a2bd1f9fd38182986001c055767ca384266f30825b46280316c1c7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 16:14:17.544581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819894003.mount: Deactivated successfully.
Dec 13 16:14:17.552436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190650379.mount: Deactivated successfully.
Dec 13 16:14:17.556260 env[1188]: time="2024-12-13T16:14:17.556220640Z" level=info msg="CreateContainer within sandbox \"df381e3f50a2bd1f9fd38182986001c055767ca384266f30825b46280316c1c7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"186c65cf3b17290c16d75be247f687dce7bf47bc39fec9f5ea7cdd4f9bf49054\""
Dec 13 16:14:17.557313 env[1188]: time="2024-12-13T16:14:17.557232287Z" level=info msg="StartContainer for \"186c65cf3b17290c16d75be247f687dce7bf47bc39fec9f5ea7cdd4f9bf49054\""
Dec 13 16:14:17.578662 kubelet[1464]: E1213 16:14:17.578626 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:17.592092 systemd[1]: Started cri-containerd-186c65cf3b17290c16d75be247f687dce7bf47bc39fec9f5ea7cdd4f9bf49054.scope.
Dec 13 16:14:17.656179 env[1188]: time="2024-12-13T16:14:17.656117362Z" level=info msg="StartContainer for \"186c65cf3b17290c16d75be247f687dce7bf47bc39fec9f5ea7cdd4f9bf49054\" returns successfully"
Dec 13 16:14:17.958984 kubelet[1464]: I1213 16:14:17.958895 1464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.19595276 podStartE2EDuration="9.95887168s" podCreationTimestamp="2024-12-13 16:14:08 +0000 UTC" firstStartedPulling="2024-12-13 16:14:09.762713394 +0000 UTC m=+46.959267819" lastFinishedPulling="2024-12-13 16:14:17.525632308 +0000 UTC m=+54.722186739" observedRunningTime="2024-12-13 16:14:17.958365295 +0000 UTC m=+55.154919734" watchObservedRunningTime="2024-12-13 16:14:17.95887168 +0000 UTC m=+55.155426119"
Dec 13 16:14:18.579249 kubelet[1464]: E1213 16:14:18.579105 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:19.580101 kubelet[1464]: E1213 16:14:19.579756 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:20.580834 kubelet[1464]: E1213 16:14:20.580761 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:21.581863 kubelet[1464]: E1213 16:14:21.581794 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:22.582701 kubelet[1464]: E1213 16:14:22.582631 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:23.534659 kubelet[1464]: E1213 16:14:23.534568 1464 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:23.582849 kubelet[1464]: E1213 16:14:23.582779 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:24.583888 kubelet[1464]: E1213 16:14:24.583802 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:25.584536 kubelet[1464]: E1213 16:14:25.584467 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:26.585696 kubelet[1464]: E1213 16:14:26.585612 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:27.252402 kubelet[1464]: I1213 16:14:27.252345 1464 topology_manager.go:215] "Topology Admit Handler" podUID="e28d85be-dde7-4a91-bd40-e82e83738a1d" podNamespace="default" podName="test-pod-1"
Dec 13 16:14:27.260498 systemd[1]: Created slice kubepods-besteffort-pode28d85be_dde7_4a91_bd40_e82e83738a1d.slice.
Dec 13 16:14:27.415867 kubelet[1464]: I1213 16:14:27.415804 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-00608cb9-ed0f-435d-b97b-7919e63f6b55\" (UniqueName: \"kubernetes.io/nfs/e28d85be-dde7-4a91-bd40-e82e83738a1d-pvc-00608cb9-ed0f-435d-b97b-7919e63f6b55\") pod \"test-pod-1\" (UID: \"e28d85be-dde7-4a91-bd40-e82e83738a1d\") " pod="default/test-pod-1"
Dec 13 16:14:27.416220 kubelet[1464]: I1213 16:14:27.416189 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zp6p\" (UniqueName: \"kubernetes.io/projected/e28d85be-dde7-4a91-bd40-e82e83738a1d-kube-api-access-4zp6p\") pod \"test-pod-1\" (UID: \"e28d85be-dde7-4a91-bd40-e82e83738a1d\") " pod="default/test-pod-1"
Dec 13 16:14:27.569592 kernel: FS-Cache: Loaded
Dec 13 16:14:27.586847 kubelet[1464]: E1213 16:14:27.586776 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:27.633337 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 16:14:27.633554 kernel: RPC: Registered udp transport module.
Dec 13 16:14:27.633617 kernel: RPC: Registered tcp transport module.
Dec 13 16:14:27.634527 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 16:14:27.729480 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 16:14:27.981973 kernel: NFS: Registering the id_resolver key type
Dec 13 16:14:27.982256 kernel: Key type id_resolver registered
Dec 13 16:14:27.984462 kernel: Key type id_legacy registered
Dec 13 16:14:28.052843 nfsidmap[2777]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 16:14:28.059935 nfsidmap[2780]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 16:14:28.167269 env[1188]: time="2024-12-13T16:14:28.167110358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e28d85be-dde7-4a91-bd40-e82e83738a1d,Namespace:default,Attempt:0,}"
Dec 13 16:14:28.222238 systemd-networkd[1016]: lxc0716c2967b83: Link UP
Dec 13 16:14:28.231820 kernel: eth0: renamed from tmpccf40
Dec 13 16:14:28.238561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 16:14:28.238654 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0716c2967b83: link becomes ready
Dec 13 16:14:28.242354 systemd-networkd[1016]: lxc0716c2967b83: Gained carrier
Dec 13 16:14:28.446365 env[1188]: time="2024-12-13T16:14:28.445688057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 16:14:28.446365 env[1188]: time="2024-12-13T16:14:28.445785485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 16:14:28.446365 env[1188]: time="2024-12-13T16:14:28.445803245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 16:14:28.446365 env[1188]: time="2024-12-13T16:14:28.446132534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccf408fac5b00d9bb5048cc6dac6c6c21208b26012875e8a8e5e8d4a2f5a2442 pid=2816 runtime=io.containerd.runc.v2
Dec 13 16:14:28.466134 systemd[1]: Started cri-containerd-ccf408fac5b00d9bb5048cc6dac6c6c21208b26012875e8a8e5e8d4a2f5a2442.scope.
Dec 13 16:14:28.543206 env[1188]: time="2024-12-13T16:14:28.543150674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e28d85be-dde7-4a91-bd40-e82e83738a1d,Namespace:default,Attempt:0,} returns sandbox id \"ccf408fac5b00d9bb5048cc6dac6c6c21208b26012875e8a8e5e8d4a2f5a2442\""
Dec 13 16:14:28.545740 env[1188]: time="2024-12-13T16:14:28.545672025Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 16:14:28.588667 kubelet[1464]: E1213 16:14:28.588599 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:28.913842 env[1188]: time="2024-12-13T16:14:28.913778997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:28.915584 env[1188]: time="2024-12-13T16:14:28.915548988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:28.919139 env[1188]: time="2024-12-13T16:14:28.919097057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:28.924187 env[1188]: time="2024-12-13T16:14:28.924151180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:28.925160 env[1188]: time="2024-12-13T16:14:28.925111280Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 16:14:28.929334 env[1188]: time="2024-12-13T16:14:28.929294839Z" level=info msg="CreateContainer within sandbox \"ccf408fac5b00d9bb5048cc6dac6c6c21208b26012875e8a8e5e8d4a2f5a2442\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 16:14:28.949263 env[1188]: time="2024-12-13T16:14:28.949213831Z" level=info msg="CreateContainer within sandbox \"ccf408fac5b00d9bb5048cc6dac6c6c21208b26012875e8a8e5e8d4a2f5a2442\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5f479b234e2320948940c75b16a9386d6ba657ac1ea4b0a78859814efe6faf8d\""
Dec 13 16:14:28.950137 env[1188]: time="2024-12-13T16:14:28.950093970Z" level=info msg="StartContainer for \"5f479b234e2320948940c75b16a9386d6ba657ac1ea4b0a78859814efe6faf8d\""
Dec 13 16:14:28.989488 systemd[1]: Started cri-containerd-5f479b234e2320948940c75b16a9386d6ba657ac1ea4b0a78859814efe6faf8d.scope.
Dec 13 16:14:29.030636 env[1188]: time="2024-12-13T16:14:29.030582259Z" level=info msg="StartContainer for \"5f479b234e2320948940c75b16a9386d6ba657ac1ea4b0a78859814efe6faf8d\" returns successfully"
Dec 13 16:14:29.535304 systemd[1]: run-containerd-runc-k8s.io-5f479b234e2320948940c75b16a9386d6ba657ac1ea4b0a78859814efe6faf8d-runc.zE1vvY.mount: Deactivated successfully.
Dec 13 16:14:29.589852 kubelet[1464]: E1213 16:14:29.589736 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:29.985944 kubelet[1464]: I1213 16:14:29.985607 1464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.603664726 podStartE2EDuration="19.985577902s" podCreationTimestamp="2024-12-13 16:14:10 +0000 UTC" firstStartedPulling="2024-12-13 16:14:28.545154642 +0000 UTC m=+65.741709067" lastFinishedPulling="2024-12-13 16:14:28.927067811 +0000 UTC m=+66.123622243" observedRunningTime="2024-12-13 16:14:29.984823652 +0000 UTC m=+67.181378084" watchObservedRunningTime="2024-12-13 16:14:29.985577902 +0000 UTC m=+67.182132327"
Dec 13 16:14:30.144890 systemd-networkd[1016]: lxc0716c2967b83: Gained IPv6LL
Dec 13 16:14:30.590925 kubelet[1464]: E1213 16:14:30.590852 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:31.592258 kubelet[1464]: E1213 16:14:31.592111 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:32.593720 kubelet[1464]: E1213 16:14:32.593659 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:33.595211 kubelet[1464]: E1213 16:14:33.595142 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:34.597172 kubelet[1464]: E1213 16:14:34.597093 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:35.598670 kubelet[1464]: E1213 16:14:35.598589 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:36.118578 systemd[1]: run-containerd-runc-k8s.io-2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5-runc.WR9tMH.mount: Deactivated successfully.
Dec 13 16:14:36.153814 env[1188]: time="2024-12-13T16:14:36.153688477Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 16:14:36.162665 env[1188]: time="2024-12-13T16:14:36.162576011Z" level=info msg="StopContainer for \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\" with timeout 2 (s)"
Dec 13 16:14:36.163243 env[1188]: time="2024-12-13T16:14:36.163207531Z" level=info msg="Stop container \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\" with signal terminated"
Dec 13 16:14:36.174337 systemd-networkd[1016]: lxc_health: Link DOWN
Dec 13 16:14:36.174349 systemd-networkd[1016]: lxc_health: Lost carrier
Dec 13 16:14:36.211144 systemd[1]: cri-containerd-2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5.scope: Deactivated successfully.
Dec 13 16:14:36.211792 systemd[1]: cri-containerd-2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5.scope: Consumed 9.911s CPU time.
Dec 13 16:14:36.241881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5-rootfs.mount: Deactivated successfully.
Dec 13 16:14:36.249747 env[1188]: time="2024-12-13T16:14:36.249686133Z" level=info msg="shim disconnected" id=2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5
Dec 13 16:14:36.250113 env[1188]: time="2024-12-13T16:14:36.250073527Z" level=warning msg="cleaning up after shim disconnected" id=2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5 namespace=k8s.io
Dec 13 16:14:36.250288 env[1188]: time="2024-12-13T16:14:36.250256391Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:36.263123 env[1188]: time="2024-12-13T16:14:36.263045291Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2948 runtime=io.containerd.runc.v2\n"
Dec 13 16:14:36.284080 env[1188]: time="2024-12-13T16:14:36.284024335Z" level=info msg="StopContainer for \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\" returns successfully"
Dec 13 16:14:36.285201 env[1188]: time="2024-12-13T16:14:36.285148918Z" level=info msg="StopPodSandbox for \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\""
Dec 13 16:14:36.285451 env[1188]: time="2024-12-13T16:14:36.285397434Z" level=info msg="Container to stop \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 16:14:36.285637 env[1188]: time="2024-12-13T16:14:36.285576863Z" level=info msg="Container to stop \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 16:14:36.285793 env[1188]: time="2024-12-13T16:14:36.285758918Z" level=info msg="Container to stop \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 16:14:36.285964 env[1188]: time="2024-12-13T16:14:36.285930963Z" level=info msg="Container to stop \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 16:14:36.286125 env[1188]: time="2024-12-13T16:14:36.286090401Z" level=info msg="Container to stop \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 16:14:36.288775 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145-shm.mount: Deactivated successfully.
Dec 13 16:14:36.299526 systemd[1]: cri-containerd-d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145.scope: Deactivated successfully.
Dec 13 16:14:36.328395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145-rootfs.mount: Deactivated successfully.
Dec 13 16:14:36.366636 env[1188]: time="2024-12-13T16:14:36.366545266Z" level=info msg="shim disconnected" id=d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145
Dec 13 16:14:36.367445 env[1188]: time="2024-12-13T16:14:36.367156212Z" level=warning msg="cleaning up after shim disconnected" id=d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145 namespace=k8s.io
Dec 13 16:14:36.367445 env[1188]: time="2024-12-13T16:14:36.367183042Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:36.381485 env[1188]: time="2024-12-13T16:14:36.381256456Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2980 runtime=io.containerd.runc.v2\n"
Dec 13 16:14:36.382579 env[1188]: time="2024-12-13T16:14:36.382531606Z" level=info msg="TearDown network for sandbox \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" successfully"
Dec 13 16:14:36.382858 env[1188]: time="2024-12-13T16:14:36.382821487Z" level=info msg="StopPodSandbox for \"d61d3518451b9e74c93312de3c8450fcd2da7dd714bc97e0f1c94bb6bac9e145\" returns successfully"
Dec 13 16:14:36.472124 kubelet[1464]: I1213 16:14:36.472028 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-run\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.472124 kubelet[1464]: I1213 16:14:36.472110 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-cgroup\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.472713 kubelet[1464]: I1213 16:14:36.472181 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b25e8eb1-78f9-4f6b-9823-67ec8806057c-clustermesh-secrets\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.472713 kubelet[1464]: I1213 16:14:36.472228 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hubble-tls\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.472713 kubelet[1464]: I1213 16:14:36.472258 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-kernel\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.472713 kubelet[1464]: I1213 16:14:36.472288 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvwbx\" (UniqueName: \"kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-kube-api-access-bvwbx\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.472713 kubelet[1464]: I1213 16:14:36.472315 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hostproc\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.472713 kubelet[1464]: I1213 16:14:36.472366 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-config-path\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.473081 kubelet[1464]: I1213 16:14:36.472412 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cni-path\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.473081 kubelet[1464]: I1213 16:14:36.472465 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-xtables-lock\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.473081 kubelet[1464]: I1213 16:14:36.472495 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-bpf-maps\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.473081 kubelet[1464]: I1213 16:14:36.472521 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-net\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.473081 kubelet[1464]: I1213 16:14:36.472546 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-etc-cni-netd\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.473081 kubelet[1464]: I1213 16:14:36.472571 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-lib-modules\") pod \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\" (UID: \"b25e8eb1-78f9-4f6b-9823-67ec8806057c\") "
Dec 13 16:14:36.473480 kubelet[1464]: I1213 16:14:36.472760 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.473480 kubelet[1464]: I1213 16:14:36.472848 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.473480 kubelet[1464]: I1213 16:14:36.472878 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.474672 kubelet[1464]: I1213 16:14:36.474628 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cni-path" (OuterVolumeSpecName: "cni-path") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.474865 kubelet[1464]: I1213 16:14:36.474833 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.475028 kubelet[1464]: I1213 16:14:36.475000 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.475215 kubelet[1464]: I1213 16:14:36.475187 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.475366 kubelet[1464]: I1213 16:14:36.475339 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.476092 kubelet[1464]: I1213 16:14:36.476056 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hostproc" (OuterVolumeSpecName: "hostproc") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.480098 kubelet[1464]: I1213 16:14:36.480024 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 16:14:36.480447 kubelet[1464]: I1213 16:14:36.480400 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25e8eb1-78f9-4f6b-9823-67ec8806057c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 16:14:36.481222 kubelet[1464]: I1213 16:14:36.481084 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 16:14:36.483489 kubelet[1464]: I1213 16:14:36.483457 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-kube-api-access-bvwbx" (OuterVolumeSpecName: "kube-api-access-bvwbx") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "kube-api-access-bvwbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 16:14:36.484390 kubelet[1464]: I1213 16:14:36.484332 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b25e8eb1-78f9-4f6b-9823-67ec8806057c" (UID: "b25e8eb1-78f9-4f6b-9823-67ec8806057c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 16:14:36.573665 kubelet[1464]: I1213 16:14:36.573515 1464 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bvwbx\" (UniqueName: \"kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-kube-api-access-bvwbx\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.573665 kubelet[1464]: I1213 16:14:36.573642 1464 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hostproc\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.573665 kubelet[1464]: I1213 16:14:36.573662 1464 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-run\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.573665 kubelet[1464]: I1213 16:14:36.573677 1464 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-cgroup\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.573665 kubelet[1464]: I1213 16:14:36.573692 1464 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b25e8eb1-78f9-4f6b-9823-67ec8806057c-clustermesh-secrets\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573717 1464 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b25e8eb1-78f9-4f6b-9823-67ec8806057c-hubble-tls\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573742 1464 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-kernel\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573757 1464 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cilium-config-path\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573772 1464 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-cni-path\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573786 1464 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-xtables-lock\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573800 1464 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-bpf-maps\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573814 1464 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-host-proc-sys-net\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574186 kubelet[1464]: I1213 16:14:36.573828 1464 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-etc-cni-netd\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.574743 kubelet[1464]: I1213 16:14:36.573841 1464 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b25e8eb1-78f9-4f6b-9823-67ec8806057c-lib-modules\") on node \"10.230.57.126\" DevicePath \"\""
Dec 13 16:14:36.599907 kubelet[1464]: E1213 16:14:36.599856 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:36.991864 kubelet[1464]: I1213 16:14:36.991802 1464 scope.go:117] "RemoveContainer" containerID="2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5"
Dec 13 16:14:36.996505 env[1188]: time="2024-12-13T16:14:36.996440512Z" level=info msg="RemoveContainer for \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\""
Dec 13 16:14:36.999174 systemd[1]: Removed slice kubepods-burstable-podb25e8eb1_78f9_4f6b_9823_67ec8806057c.slice.
Dec 13 16:14:36.999323 systemd[1]: kubepods-burstable-podb25e8eb1_78f9_4f6b_9823_67ec8806057c.slice: Consumed 10.095s CPU time.
Dec 13 16:14:37.019566 env[1188]: time="2024-12-13T16:14:37.019513081Z" level=info msg="RemoveContainer for \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\" returns successfully"
Dec 13 16:14:37.020251 kubelet[1464]: I1213 16:14:37.020220 1464 scope.go:117] "RemoveContainer" containerID="d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a"
Dec 13 16:14:37.021850 env[1188]: time="2024-12-13T16:14:37.021784231Z" level=info msg="RemoveContainer for \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\""
Dec 13 16:14:37.036375 env[1188]: time="2024-12-13T16:14:37.036284475Z" level=info msg="RemoveContainer for \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\" returns successfully"
Dec 13 16:14:37.036628 kubelet[1464]: I1213 16:14:37.036537 1464 scope.go:117] "RemoveContainer" containerID="cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6"
Dec 13 16:14:37.038086 env[1188]: time="2024-12-13T16:14:37.038012880Z" level=info msg="RemoveContainer for \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\""
Dec 13 16:14:37.041365 env[1188]: time="2024-12-13T16:14:37.041263280Z" level=info msg="RemoveContainer for \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\" returns successfully"
Dec 13 16:14:37.041693 kubelet[1464]: I1213 16:14:37.041611 1464 scope.go:117] "RemoveContainer" containerID="321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71"
Dec 13 16:14:37.043376 env[1188]: time="2024-12-13T16:14:37.043015225Z" level=info msg="RemoveContainer for \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\""
Dec 13 16:14:37.046134 env[1188]: time="2024-12-13T16:14:37.046098463Z" level=info msg="RemoveContainer for \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\" returns successfully"
Dec 13 16:14:37.046465 kubelet[1464]: I1213 16:14:37.046437 1464 scope.go:117] "RemoveContainer" containerID="e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d"
Dec 13 16:14:37.048182 env[1188]: time="2024-12-13T16:14:37.047836964Z" level=info msg="RemoveContainer for \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\""
Dec 13 16:14:37.050779 env[1188]: time="2024-12-13T16:14:37.050743216Z" level=info msg="RemoveContainer for \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\" returns successfully"
Dec 13 16:14:37.051199 kubelet[1464]: I1213 16:14:37.051175 1464 scope.go:117] "RemoveContainer" containerID="2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5"
Dec 13 16:14:37.051617 env[1188]: time="2024-12-13T16:14:37.051478061Z" level=error msg="ContainerStatus for \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\": not found"
Dec 13 16:14:37.052005 kubelet[1464]: E1213 16:14:37.051954 1464 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\": not found" containerID="2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5"
Dec 13 16:14:37.052175 kubelet[1464]: I1213 16:14:37.052020 1464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5"} err="failed to get container status \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2de1cacb5896fdd10a2bd069507835fe9c9b34c8662c208c235bf27f6bba1ef5\": not found"
Dec 13 16:14:37.052240 kubelet[1464]: I1213 16:14:37.052178 1464 scope.go:117] "RemoveContainer" containerID="d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a"
Dec 13 16:14:37.052498 env[1188]: time="2024-12-13T16:14:37.052431568Z" level=error msg="ContainerStatus for \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\": not found"
Dec 13 16:14:37.052731 kubelet[1464]: E1213 16:14:37.052697 1464 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\": not found" containerID="d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a"
Dec 13 16:14:37.052820 kubelet[1464]: I1213 16:14:37.052752 1464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a"} err="failed to get container status \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d35eb7d57e0577f8715ed1ce93e09f059979edec45555a8b0268b787ff8fcb8a\": not found"
Dec 13 16:14:37.052820 kubelet[1464]: I1213 16:14:37.052777 1464 scope.go:117] "RemoveContainer"
containerID="cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6" Dec 13 16:14:37.053207 env[1188]: time="2024-12-13T16:14:37.053131605Z" level=error msg="ContainerStatus for \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\": not found" Dec 13 16:14:37.053559 kubelet[1464]: E1213 16:14:37.053517 1464 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\": not found" containerID="cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6" Dec 13 16:14:37.053773 kubelet[1464]: I1213 16:14:37.053741 1464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6"} err="failed to get container status \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc5e7184037e36a96a61a6b88ff9faf474e7172d89dc0b7414ebd1ef931942b6\": not found" Dec 13 16:14:37.053911 kubelet[1464]: I1213 16:14:37.053887 1464 scope.go:117] "RemoveContainer" containerID="321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71" Dec 13 16:14:37.054291 env[1188]: time="2024-12-13T16:14:37.054229754Z" level=error msg="ContainerStatus for \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\": not found" Dec 13 16:14:37.054605 kubelet[1464]: E1213 16:14:37.054560 1464 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\": not found" containerID="321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71" Dec 13 16:14:37.054729 kubelet[1464]: I1213 16:14:37.054607 1464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71"} err="failed to get container status \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\": rpc error: code = NotFound desc = an error occurred when try to find container \"321f4109e900bfcf0cc2b238ef23789182aea437df0566a2ec118df431122b71\": not found" Dec 13 16:14:37.054729 kubelet[1464]: I1213 16:14:37.054647 1464 scope.go:117] "RemoveContainer" containerID="e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d" Dec 13 16:14:37.054988 env[1188]: time="2024-12-13T16:14:37.054922563Z" level=error msg="ContainerStatus for \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\": not found" Dec 13 16:14:37.055164 kubelet[1464]: E1213 16:14:37.055120 1464 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\": not found" containerID="e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d" Dec 13 16:14:37.055267 kubelet[1464]: I1213 16:14:37.055167 1464 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d"} err="failed to get container status \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"e285b76f4f339865c47d251f9eadc950092e90e082de5dc102b4427c66e0904d\": not found" Dec 13 16:14:37.113013 systemd[1]: var-lib-kubelet-pods-b25e8eb1\x2d78f9\x2d4f6b\x2d9823\x2d67ec8806057c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 16:14:37.113203 systemd[1]: var-lib-kubelet-pods-b25e8eb1\x2d78f9\x2d4f6b\x2d9823\x2d67ec8806057c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbvwbx.mount: Deactivated successfully. Dec 13 16:14:37.113325 systemd[1]: var-lib-kubelet-pods-b25e8eb1\x2d78f9\x2d4f6b\x2d9823\x2d67ec8806057c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 16:14:37.600840 kubelet[1464]: E1213 16:14:37.600731 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:14:37.760098 kubelet[1464]: I1213 16:14:37.760034 1464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" path="/var/lib/kubelet/pods/b25e8eb1-78f9-4f6b-9823-67ec8806057c/volumes" Dec 13 16:14:38.601918 kubelet[1464]: E1213 16:14:38.601836 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:14:38.703598 kubelet[1464]: E1213 16:14:38.703543 1464 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 16:14:39.602906 kubelet[1464]: E1213 16:14:39.602813 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:14:40.448736 kubelet[1464]: I1213 16:14:40.448609 1464 topology_manager.go:215] "Topology Admit Handler" podUID="00167ed9-6c8e-466e-a2cd-7d21ba86589e" podNamespace="kube-system" podName="cilium-operator-599987898-4hnjv" Dec 13 
16:14:40.449197 kubelet[1464]: E1213 16:14:40.449166 1464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" containerName="apply-sysctl-overwrites" Dec 13 16:14:40.449337 kubelet[1464]: E1213 16:14:40.449312 1464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" containerName="mount-bpf-fs" Dec 13 16:14:40.449504 kubelet[1464]: E1213 16:14:40.449479 1464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" containerName="clean-cilium-state" Dec 13 16:14:40.449633 kubelet[1464]: E1213 16:14:40.449610 1464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" containerName="cilium-agent" Dec 13 16:14:40.449779 kubelet[1464]: E1213 16:14:40.449756 1464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" containerName="mount-cgroup" Dec 13 16:14:40.450109 kubelet[1464]: I1213 16:14:40.450079 1464 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25e8eb1-78f9-4f6b-9823-67ec8806057c" containerName="cilium-agent" Dec 13 16:14:40.458064 systemd[1]: Created slice kubepods-besteffort-pod00167ed9_6c8e_466e_a2cd_7d21ba86589e.slice. 
Dec 13 16:14:40.461269 kubelet[1464]: W1213 16:14:40.461238 1464 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.230.57.126" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.230.57.126' and this object
Dec 13 16:14:40.461514 kubelet[1464]: E1213 16:14:40.461483 1464 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.230.57.126" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.230.57.126' and this object
Dec 13 16:14:40.477482 kubelet[1464]: I1213 16:14:40.477404 1464 topology_manager.go:215] "Topology Admit Handler" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b" podNamespace="kube-system" podName="cilium-lfbjq"
Dec 13 16:14:40.484917 systemd[1]: Created slice kubepods-burstable-poddc7bf0ff_b0a5_488c_9238_23705400f90b.slice.
Dec 13 16:14:40.603956 kubelet[1464]: E1213 16:14:40.603863 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:40.604744 kubelet[1464]: I1213 16:14:40.604681 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-hostproc\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.604940 kubelet[1464]: I1213 16:14:40.604910 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-cgroup\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.605149 kubelet[1464]: I1213 16:14:40.605122 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-lib-modules\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.605339 kubelet[1464]: I1213 16:14:40.605312 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-run\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.605532 kubelet[1464]: I1213 16:14:40.605506 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-bpf-maps\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.605689 kubelet[1464]: I1213 16:14:40.605662 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-ipsec-secrets\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.605884 kubelet[1464]: I1213 16:14:40.605857 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00167ed9-6c8e-466e-a2cd-7d21ba86589e-cilium-config-path\") pod \"cilium-operator-599987898-4hnjv\" (UID: \"00167ed9-6c8e-466e-a2cd-7d21ba86589e\") " pod="kube-system/cilium-operator-599987898-4hnjv"
Dec 13 16:14:40.606072 kubelet[1464]: I1213 16:14:40.606044 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-config-path\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.606258 kubelet[1464]: I1213 16:14:40.606232 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnzvm\" (UniqueName: \"kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-kube-api-access-qnzvm\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.606472 kubelet[1464]: I1213 16:14:40.606415 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-net\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.606663 kubelet[1464]: I1213 16:14:40.606635 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-kernel\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.606839 kubelet[1464]: I1213 16:14:40.606810 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjfq\" (UniqueName: \"kubernetes.io/projected/00167ed9-6c8e-466e-a2cd-7d21ba86589e-kube-api-access-xtjfq\") pod \"cilium-operator-599987898-4hnjv\" (UID: \"00167ed9-6c8e-466e-a2cd-7d21ba86589e\") " pod="kube-system/cilium-operator-599987898-4hnjv"
Dec 13 16:14:40.607010 kubelet[1464]: I1213 16:14:40.606985 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-etc-cni-netd\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.607195 kubelet[1464]: I1213 16:14:40.607170 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-clustermesh-secrets\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.607439 kubelet[1464]: I1213 16:14:40.607400 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-hubble-tls\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.607619 kubelet[1464]: I1213 16:14:40.607589 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cni-path\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:40.607831 kubelet[1464]: I1213 16:14:40.607806 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-xtables-lock\") pod \"cilium-lfbjq\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " pod="kube-system/cilium-lfbjq"
Dec 13 16:14:41.612362 kubelet[1464]: E1213 16:14:41.612294 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:41.665804 env[1188]: time="2024-12-13T16:14:41.663482528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-4hnjv,Uid:00167ed9-6c8e-466e-a2cd-7d21ba86589e,Namespace:kube-system,Attempt:0,}"
Dec 13 16:14:41.696676 env[1188]: time="2024-12-13T16:14:41.696585814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lfbjq,Uid:dc7bf0ff-b0a5-488c-9238-23705400f90b,Namespace:kube-system,Attempt:0,}"
Dec 13 16:14:41.730197 env[1188]: time="2024-12-13T16:14:41.730091376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 16:14:41.730513 env[1188]: time="2024-12-13T16:14:41.730159749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 16:14:41.730513 env[1188]: time="2024-12-13T16:14:41.730178933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 16:14:41.730513 env[1188]: time="2024-12-13T16:14:41.730399010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b29f5fe3d20f3f895304c9905ef5c16e2cf884e2d56c54ebf505766f65ec07ff pid=3010 runtime=io.containerd.runc.v2
Dec 13 16:14:41.741497 env[1188]: time="2024-12-13T16:14:41.741359255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 16:14:41.741683 env[1188]: time="2024-12-13T16:14:41.741463817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 16:14:41.741683 env[1188]: time="2024-12-13T16:14:41.741489612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 16:14:41.741850 env[1188]: time="2024-12-13T16:14:41.741687980Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf pid=3030 runtime=io.containerd.runc.v2
Dec 13 16:14:41.764309 systemd[1]: Started cri-containerd-b29f5fe3d20f3f895304c9905ef5c16e2cf884e2d56c54ebf505766f65ec07ff.scope.
Dec 13 16:14:41.790844 systemd[1]: Started cri-containerd-26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf.scope.
Dec 13 16:14:41.845876 env[1188]: time="2024-12-13T16:14:41.845804967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lfbjq,Uid:dc7bf0ff-b0a5-488c-9238-23705400f90b,Namespace:kube-system,Attempt:0,} returns sandbox id \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\""
Dec 13 16:14:41.852855 env[1188]: time="2024-12-13T16:14:41.852803029Z" level=info msg="CreateContainer within sandbox \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 16:14:41.871718 env[1188]: time="2024-12-13T16:14:41.870973311Z" level=info msg="CreateContainer within sandbox \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\""
Dec 13 16:14:41.873486 env[1188]: time="2024-12-13T16:14:41.873448926Z" level=info msg="StartContainer for \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\""
Dec 13 16:14:41.882322 env[1188]: time="2024-12-13T16:14:41.882283773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-4hnjv,Uid:00167ed9-6c8e-466e-a2cd-7d21ba86589e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b29f5fe3d20f3f895304c9905ef5c16e2cf884e2d56c54ebf505766f65ec07ff\""
Dec 13 16:14:41.885277 env[1188]: time="2024-12-13T16:14:41.885218726Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 16:14:41.903107 systemd[1]: Started cri-containerd-40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae.scope.
Dec 13 16:14:41.924778 systemd[1]: cri-containerd-40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae.scope: Deactivated successfully.
Dec 13 16:14:41.947055 env[1188]: time="2024-12-13T16:14:41.946965750Z" level=info msg="shim disconnected" id=40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae
Dec 13 16:14:41.947511 env[1188]: time="2024-12-13T16:14:41.947457374Z" level=warning msg="cleaning up after shim disconnected" id=40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae namespace=k8s.io
Dec 13 16:14:41.947693 env[1188]: time="2024-12-13T16:14:41.947664441Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:41.961318 env[1188]: time="2024-12-13T16:14:41.961211121Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3111 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T16:14:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 16:14:41.961912 env[1188]: time="2024-12-13T16:14:41.961696577Z" level=error msg="copy shim log" error="read /proc/self/fd/69: file already closed"
Dec 13 16:14:41.962229 env[1188]: time="2024-12-13T16:14:41.962174600Z" level=error msg="Failed to pipe stdout of container \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\"" error="reading from a closed fifo"
Dec 13 16:14:41.966809 env[1188]: time="2024-12-13T16:14:41.966737266Z" level=error msg="Failed to pipe stderr of container \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\"" error="reading from a closed fifo"
Dec 13 16:14:41.968331 env[1188]: time="2024-12-13T16:14:41.968257336Z" level=error msg="StartContainer for \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 16:14:41.969447 kubelet[1464]: E1213 16:14:41.968887 1464 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae"
Dec 13 16:14:41.969447 kubelet[1464]: E1213 16:14:41.969262 1464 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 16:14:41.969447 kubelet[1464]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 16:14:41.969447 kubelet[1464]: rm /hostbin/cilium-mount
Dec 13 16:14:41.969753 kubelet[1464]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnzvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lfbjq_kube-system(dc7bf0ff-b0a5-488c-9238-23705400f90b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 16:14:41.969966 kubelet[1464]: E1213 16:14:41.969338 1464 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lfbjq" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b"
Dec 13 16:14:42.012018 env[1188]: time="2024-12-13T16:14:42.011949699Z" level=info msg="CreateContainer within sandbox \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Dec 13 16:14:42.025699 env[1188]: time="2024-12-13T16:14:42.025656476Z" level=info msg="CreateContainer within sandbox \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\""
Dec 13 16:14:42.026362 env[1188]: time="2024-12-13T16:14:42.026307966Z" level=info msg="StartContainer for \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\""
Dec 13 16:14:42.063682 systemd[1]: Started cri-containerd-51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253.scope.
Dec 13 16:14:42.088468 systemd[1]: cri-containerd-51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253.scope: Deactivated successfully.
Dec 13 16:14:42.099938 env[1188]: time="2024-12-13T16:14:42.099865175Z" level=info msg="shim disconnected" id=51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253
Dec 13 16:14:42.100137 env[1188]: time="2024-12-13T16:14:42.099941960Z" level=warning msg="cleaning up after shim disconnected" id=51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253 namespace=k8s.io
Dec 13 16:14:42.100137 env[1188]: time="2024-12-13T16:14:42.099959739Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:42.114526 env[1188]: time="2024-12-13T16:14:42.114451957Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3147 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T16:14:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 16:14:42.114951 env[1188]: time="2024-12-13T16:14:42.114870662Z" level=error msg="copy shim log" error="read /proc/self/fd/69: file already closed"
Dec 13 16:14:42.116599 env[1188]: time="2024-12-13T16:14:42.116548971Z" level=error msg="Failed to pipe stderr of container \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\"" error="reading from a closed fifo"
Dec 13 16:14:42.117950 env[1188]: time="2024-12-13T16:14:42.117901045Z" level=error msg="Failed to pipe stdout of container \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\"" error="reading from a closed fifo"
Dec 13 16:14:42.119606 env[1188]: time="2024-12-13T16:14:42.119558503Z" level=error msg="StartContainer for \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 16:14:42.120044 kubelet[1464]: E1213 16:14:42.119968 1464 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253"
Dec 13 16:14:42.120237 kubelet[1464]: E1213 16:14:42.120192 1464 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 16:14:42.120237 kubelet[1464]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 16:14:42.120237 kubelet[1464]: rm /hostbin/cilium-mount
Dec 13 16:14:42.120463 kubelet[1464]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnzvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lfbjq_kube-system(dc7bf0ff-b0a5-488c-9238-23705400f90b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 16:14:42.120463 kubelet[1464]: E1213 16:14:42.120264 1464 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lfbjq" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b" Dec 13 16:14:42.614156 kubelet[1464]: E1213 16:14:42.614064 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:14:43.017291 kubelet[1464]: I1213 16:14:43.017124 1464 scope.go:117] "RemoveContainer" containerID="40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae" Dec 13 16:14:43.017986 env[1188]: time="2024-12-13T16:14:43.017916876Z" level=info msg="StopPodSandbox for \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\"" Dec 13 16:14:43.018624 env[1188]: time="2024-12-13T16:14:43.018562726Z" level=info msg="Container to stop \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:43.018849 env[1188]: time="2024-12-13T16:14:43.018814215Z" level=info msg="Container to stop \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:43.022728 env[1188]: time="2024-12-13T16:14:43.019022646Z" level=info msg="RemoveContainer for \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\"" Dec 13 16:14:43.021758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf-shm.mount: Deactivated successfully. 
Dec 13 16:14:43.027974 env[1188]: time="2024-12-13T16:14:43.027919525Z" level=info msg="RemoveContainer for \"40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae\" returns successfully" Dec 13 16:14:43.033647 systemd[1]: cri-containerd-26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf.scope: Deactivated successfully. Dec 13 16:14:43.063933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf-rootfs.mount: Deactivated successfully. Dec 13 16:14:43.076195 env[1188]: time="2024-12-13T16:14:43.076116280Z" level=info msg="shim disconnected" id=26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf Dec 13 16:14:43.076384 env[1188]: time="2024-12-13T16:14:43.076203358Z" level=warning msg="cleaning up after shim disconnected" id=26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf namespace=k8s.io Dec 13 16:14:43.076384 env[1188]: time="2024-12-13T16:14:43.076221798Z" level=info msg="cleaning up dead shim" Dec 13 16:14:43.088963 env[1188]: time="2024-12-13T16:14:43.088893668Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3180 runtime=io.containerd.runc.v2\n" Dec 13 16:14:43.089816 env[1188]: time="2024-12-13T16:14:43.089772597Z" level=info msg="TearDown network for sandbox \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\" successfully" Dec 13 16:14:43.089979 env[1188]: time="2024-12-13T16:14:43.089944921Z" level=info msg="StopPodSandbox for \"26bc1b94bbac24262f03dc77efd8b80c435f5f85e07a7e1392ceb0a84b8fb7cf\" returns successfully" Dec 13 16:14:43.230460 kubelet[1464]: I1213 16:14:43.230167 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-hostproc\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: 
\"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.230460 kubelet[1464]: I1213 16:14:43.230230 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-net\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.230460 kubelet[1464]: I1213 16:14:43.230261 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-xtables-lock\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.230460 kubelet[1464]: I1213 16:14:43.230290 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-kernel\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.230460 kubelet[1464]: I1213 16:14:43.230292 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.230460 kubelet[1464]: I1213 16:14:43.230326 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-hubble-tls\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.230400 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-cgroup\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231257 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-ipsec-secrets\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231280 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231299 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnzvm\" (UniqueName: \"kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-kube-api-access-qnzvm\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231321 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231347 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-lib-modules\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231381 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-run\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231440 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-clustermesh-secrets\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231471 1464 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cni-path\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231529 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-etc-cni-netd\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231584 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-bpf-maps\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231618 1464 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-config-path\") pod \"dc7bf0ff-b0a5-488c-9238-23705400f90b\" (UID: \"dc7bf0ff-b0a5-488c-9238-23705400f90b\") " Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231700 1464 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-hostproc\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231721 1464 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-net\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.232396 kubelet[1464]: I1213 16:14:43.231772 1464 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-xtables-lock\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.237594 systemd[1]: var-lib-kubelet-pods-dc7bf0ff\x2db0a5\x2d488c\x2d9238\x2d23705400f90b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 16:14:43.240315 kubelet[1464]: I1213 16:14:43.231350 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.240579 kubelet[1464]: I1213 16:14:43.231535 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.240759 kubelet[1464]: I1213 16:14:43.231565 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.240961 kubelet[1464]: I1213 16:14:43.240920 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.241826 kubelet[1464]: I1213 16:14:43.241787 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.244008 systemd[1]: var-lib-kubelet-pods-dc7bf0ff\x2db0a5\x2d488c\x2d9238\x2d23705400f90b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 16:14:43.245365 kubelet[1464]: I1213 16:14:43.245323 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.245644 kubelet[1464]: I1213 16:14:43.245606 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:43.245916 kubelet[1464]: I1213 16:14:43.245880 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:14:43.246009 kubelet[1464]: I1213 16:14:43.245982 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 16:14:43.247565 kubelet[1464]: I1213 16:14:43.247533 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 16:14:43.248429 kubelet[1464]: I1213 16:14:43.248363 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-kube-api-access-qnzvm" (OuterVolumeSpecName: "kube-api-access-qnzvm") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "kube-api-access-qnzvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:14:43.251111 kubelet[1464]: I1213 16:14:43.251076 1464 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc7bf0ff-b0a5-488c-9238-23705400f90b" (UID: "dc7bf0ff-b0a5-488c-9238-23705400f90b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 16:14:43.332849 kubelet[1464]: I1213 16:14:43.332795 1464 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-bpf-maps\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.332849 kubelet[1464]: I1213 16:14:43.332850 1464 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-config-path\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.332849 kubelet[1464]: I1213 16:14:43.332869 1464 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-etc-cni-netd\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.332885 1464 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-cgroup\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.332901 1464 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-host-proc-sys-kernel\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.332916 1464 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-hubble-tls\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.332930 1464 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-lib-modules\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 
kubelet[1464]: I1213 16:14:43.332944 1464 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-run\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.332959 1464 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-cilium-ipsec-secrets\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.332973 1464 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qnzvm\" (UniqueName: \"kubernetes.io/projected/dc7bf0ff-b0a5-488c-9238-23705400f90b-kube-api-access-qnzvm\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.332988 1464 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc7bf0ff-b0a5-488c-9238-23705400f90b-clustermesh-secrets\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.333244 kubelet[1464]: I1213 16:14:43.333002 1464 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc7bf0ff-b0a5-488c-9238-23705400f90b-cni-path\") on node \"10.230.57.126\" DevicePath \"\"" Dec 13 16:14:43.534552 kubelet[1464]: E1213 16:14:43.534456 1464 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:14:43.616127 kubelet[1464]: E1213 16:14:43.615296 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 16:14:43.705366 kubelet[1464]: E1213 16:14:43.705266 1464 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 16:14:43.717279 
systemd[1]: var-lib-kubelet-pods-dc7bf0ff\x2db0a5\x2d488c\x2d9238\x2d23705400f90b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqnzvm.mount: Deactivated successfully. Dec 13 16:14:43.717461 systemd[1]: var-lib-kubelet-pods-dc7bf0ff\x2db0a5\x2d488c\x2d9238\x2d23705400f90b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 16:14:43.764779 systemd[1]: Removed slice kubepods-burstable-poddc7bf0ff_b0a5_488c_9238_23705400f90b.slice. Dec 13 16:14:43.829138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632875450.mount: Deactivated successfully. Dec 13 16:14:44.028014 kubelet[1464]: I1213 16:14:44.027513 1464 scope.go:117] "RemoveContainer" containerID="51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253" Dec 13 16:14:44.031861 env[1188]: time="2024-12-13T16:14:44.031803765Z" level=info msg="RemoveContainer for \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\"" Dec 13 16:14:44.037522 env[1188]: time="2024-12-13T16:14:44.037474988Z" level=info msg="RemoveContainer for \"51ccc65cb493e4be92b35812e12db4e5a1cb74cf293cf1fe593d71786eb86253\" returns successfully" Dec 13 16:14:44.112698 kubelet[1464]: I1213 16:14:44.111575 1464 topology_manager.go:215] "Topology Admit Handler" podUID="9b3fb892-a55f-41c2-8285-891de5bce1b4" podNamespace="kube-system" podName="cilium-wzw7q" Dec 13 16:14:44.112698 kubelet[1464]: E1213 16:14:44.111657 1464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b" containerName="mount-cgroup" Dec 13 16:14:44.112698 kubelet[1464]: E1213 16:14:44.111674 1464 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b" containerName="mount-cgroup" Dec 13 16:14:44.112698 kubelet[1464]: I1213 16:14:44.111704 1464 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b" containerName="mount-cgroup" Dec 13 16:14:44.112698 
kubelet[1464]: I1213 16:14:44.111716 1464 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b" containerName="mount-cgroup" Dec 13 16:14:44.122116 systemd[1]: Created slice kubepods-burstable-pod9b3fb892_a55f_41c2_8285_891de5bce1b4.slice. Dec 13 16:14:44.237715 kubelet[1464]: I1213 16:14:44.237547 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-cilium-run\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.237715 kubelet[1464]: I1213 16:14:44.237630 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-hostproc\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238051 kubelet[1464]: I1213 16:14:44.237807 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-host-proc-sys-kernel\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238051 kubelet[1464]: I1213 16:14:44.237876 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hklrp\" (UniqueName: \"kubernetes.io/projected/9b3fb892-a55f-41c2-8285-891de5bce1b4-kube-api-access-hklrp\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238051 kubelet[1464]: I1213 16:14:44.237928 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-cni-path\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238051 kubelet[1464]: I1213 16:14:44.237961 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-xtables-lock\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238051 kubelet[1464]: I1213 16:14:44.238011 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b3fb892-a55f-41c2-8285-891de5bce1b4-clustermesh-secrets\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238051 kubelet[1464]: I1213 16:14:44.238039 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b3fb892-a55f-41c2-8285-891de5bce1b4-cilium-ipsec-secrets\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238457 kubelet[1464]: I1213 16:14:44.238093 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-lib-modules\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238457 kubelet[1464]: I1213 16:14:44.238159 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b3fb892-a55f-41c2-8285-891de5bce1b4-hubble-tls\") pod \"cilium-wzw7q\" (UID: 
\"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238457 kubelet[1464]: I1213 16:14:44.238190 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-bpf-maps\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238457 kubelet[1464]: I1213 16:14:44.238235 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-cilium-cgroup\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238457 kubelet[1464]: I1213 16:14:44.238272 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-etc-cni-netd\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238457 kubelet[1464]: I1213 16:14:44.238319 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b3fb892-a55f-41c2-8285-891de5bce1b4-cilium-config-path\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.238457 kubelet[1464]: I1213 16:14:44.238352 1464 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b3fb892-a55f-41c2-8285-891de5bce1b4-host-proc-sys-net\") pod \"cilium-wzw7q\" (UID: \"9b3fb892-a55f-41c2-8285-891de5bce1b4\") " pod="kube-system/cilium-wzw7q" Dec 13 16:14:44.431154 env[1188]: 
time="2024-12-13T16:14:44.431069703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzw7q,Uid:9b3fb892-a55f-41c2-8285-891de5bce1b4,Namespace:kube-system,Attempt:0,}" Dec 13 16:14:44.464877 env[1188]: time="2024-12-13T16:14:44.464747097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:14:44.464877 env[1188]: time="2024-12-13T16:14:44.464829066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:14:44.464877 env[1188]: time="2024-12-13T16:14:44.464848104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:14:44.465526 env[1188]: time="2024-12-13T16:14:44.465450864Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a pid=3209 runtime=io.containerd.runc.v2 Dec 13 16:14:44.487449 systemd[1]: Started cri-containerd-de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a.scope. 
Dec 13 16:14:44.532708 env[1188]: time="2024-12-13T16:14:44.532594334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wzw7q,Uid:9b3fb892-a55f-41c2-8285-891de5bce1b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\""
Dec 13 16:14:44.537167 env[1188]: time="2024-12-13T16:14:44.537114043Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 16:14:44.562620 env[1188]: time="2024-12-13T16:14:44.562547033Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b\""
Dec 13 16:14:44.563655 env[1188]: time="2024-12-13T16:14:44.563618950Z" level=info msg="StartContainer for \"83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b\""
Dec 13 16:14:44.596132 systemd[1]: Started cri-containerd-83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b.scope.
Dec 13 16:14:44.615954 kubelet[1464]: E1213 16:14:44.615900 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:44.656506 env[1188]: time="2024-12-13T16:14:44.656446181Z" level=info msg="StartContainer for \"83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b\" returns successfully"
Dec 13 16:14:44.676086 systemd[1]: cri-containerd-83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b.scope: Deactivated successfully.
Dec 13 16:14:44.742663 env[1188]: time="2024-12-13T16:14:44.742401591Z" level=info msg="shim disconnected" id=83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b
Dec 13 16:14:44.743462 env[1188]: time="2024-12-13T16:14:44.743413751Z" level=warning msg="cleaning up after shim disconnected" id=83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b namespace=k8s.io
Dec 13 16:14:44.743654 env[1188]: time="2024-12-13T16:14:44.743624038Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:44.762025 env[1188]: time="2024-12-13T16:14:44.761964920Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3293 runtime=io.containerd.runc.v2\n"
Dec 13 16:14:45.035854 env[1188]: time="2024-12-13T16:14:45.035617464Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 16:14:45.057844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149713826.mount: Deactivated successfully.
Dec 13 16:14:45.060073 kubelet[1464]: W1213 16:14:45.059925 1464 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc7bf0ff_b0a5_488c_9238_23705400f90b.slice/cri-containerd-40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae.scope WatchSource:0}: container "40a6f6d60040afa753990020964fa5b4d144356a7d3ba56d5417fde3521dfbae" in namespace "k8s.io": not found
Dec 13 16:14:45.069909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506234226.mount: Deactivated successfully.
Dec 13 16:14:45.086578 env[1188]: time="2024-12-13T16:14:45.086523087Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd\""
Dec 13 16:14:45.087251 env[1188]: time="2024-12-13T16:14:45.087210003Z" level=info msg="StartContainer for \"924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd\""
Dec 13 16:14:45.115981 systemd[1]: Started cri-containerd-924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd.scope.
Dec 13 16:14:45.125457 kubelet[1464]: I1213 16:14:45.123036 1464 setters.go:580] "Node became not ready" node="10.230.57.126" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T16:14:45Z","lastTransitionTime":"2024-12-13T16:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 16:14:45.187483 env[1188]: time="2024-12-13T16:14:45.184241937Z" level=info msg="StartContainer for \"924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd\" returns successfully"
Dec 13 16:14:45.196539 systemd[1]: cri-containerd-924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd.scope: Deactivated successfully.
Dec 13 16:14:45.272500 env[1188]: time="2024-12-13T16:14:45.272400092Z" level=info msg="shim disconnected" id=924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd
Dec 13 16:14:45.272500 env[1188]: time="2024-12-13T16:14:45.272479499Z" level=warning msg="cleaning up after shim disconnected" id=924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd namespace=k8s.io
Dec 13 16:14:45.272500 env[1188]: time="2024-12-13T16:14:45.272497438Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:45.294141 env[1188]: time="2024-12-13T16:14:45.294014902Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3358 runtime=io.containerd.runc.v2\n"
Dec 13 16:14:45.616874 kubelet[1464]: E1213 16:14:45.616766 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:45.642848 env[1188]: time="2024-12-13T16:14:45.642759710Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:45.645223 env[1188]: time="2024-12-13T16:14:45.645180436Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:45.647277 env[1188]: time="2024-12-13T16:14:45.647239162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 16:14:45.648227 env[1188]: time="2024-12-13T16:14:45.648167427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 16:14:45.657708 env[1188]: time="2024-12-13T16:14:45.657641273Z" level=info msg="CreateContainer within sandbox \"b29f5fe3d20f3f895304c9905ef5c16e2cf884e2d56c54ebf505766f65ec07ff\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 16:14:45.673710 env[1188]: time="2024-12-13T16:14:45.673555336Z" level=info msg="CreateContainer within sandbox \"b29f5fe3d20f3f895304c9905ef5c16e2cf884e2d56c54ebf505766f65ec07ff\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fffc49456ad5add796b486130b9a4174cd1121a10149d6691e69e911dcb863c1\""
Dec 13 16:14:45.674311 env[1188]: time="2024-12-13T16:14:45.674252593Z" level=info msg="StartContainer for \"fffc49456ad5add796b486130b9a4174cd1121a10149d6691e69e911dcb863c1\""
Dec 13 16:14:45.700549 systemd[1]: Started cri-containerd-fffc49456ad5add796b486130b9a4174cd1121a10149d6691e69e911dcb863c1.scope.
Dec 13 16:14:45.763649 kubelet[1464]: I1213 16:14:45.763586 1464 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7bf0ff-b0a5-488c-9238-23705400f90b" path="/var/lib/kubelet/pods/dc7bf0ff-b0a5-488c-9238-23705400f90b/volumes"
Dec 13 16:14:45.794931 env[1188]: time="2024-12-13T16:14:45.794876747Z" level=info msg="StartContainer for \"fffc49456ad5add796b486130b9a4174cd1121a10149d6691e69e911dcb863c1\" returns successfully"
Dec 13 16:14:46.048364 env[1188]: time="2024-12-13T16:14:46.048203829Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 16:14:46.068306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2001081402.mount: Deactivated successfully.
Dec 13 16:14:46.075000 kubelet[1464]: I1213 16:14:46.074935 1464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-4hnjv" podStartSLOduration=2.303853863 podStartE2EDuration="6.074897897s" podCreationTimestamp="2024-12-13 16:14:40 +0000 UTC" firstStartedPulling="2024-12-13 16:14:41.884399241 +0000 UTC m=+79.080953663" lastFinishedPulling="2024-12-13 16:14:45.655443267 +0000 UTC m=+82.851997697" observedRunningTime="2024-12-13 16:14:46.0559133 +0000 UTC m=+83.252467749" watchObservedRunningTime="2024-12-13 16:14:46.074897897 +0000 UTC m=+83.271452335"
Dec 13 16:14:46.078278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037541734.mount: Deactivated successfully.
Dec 13 16:14:46.083380 env[1188]: time="2024-12-13T16:14:46.083326829Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991\""
Dec 13 16:14:46.084024 env[1188]: time="2024-12-13T16:14:46.083960276Z" level=info msg="StartContainer for \"3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991\""
Dec 13 16:14:46.118074 systemd[1]: Started cri-containerd-3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991.scope.
Dec 13 16:14:46.174202 env[1188]: time="2024-12-13T16:14:46.174134363Z" level=info msg="StartContainer for \"3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991\" returns successfully"
Dec 13 16:14:46.179867 systemd[1]: cri-containerd-3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991.scope: Deactivated successfully.
Dec 13 16:14:46.213743 env[1188]: time="2024-12-13T16:14:46.213672158Z" level=info msg="shim disconnected" id=3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991
Dec 13 16:14:46.213743 env[1188]: time="2024-12-13T16:14:46.213741539Z" level=warning msg="cleaning up after shim disconnected" id=3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991 namespace=k8s.io
Dec 13 16:14:46.214057 env[1188]: time="2024-12-13T16:14:46.213758201Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:46.224488 env[1188]: time="2024-12-13T16:14:46.224439992Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3453 runtime=io.containerd.runc.v2\n"
Dec 13 16:14:46.617076 kubelet[1464]: E1213 16:14:46.617008 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:47.054033 env[1188]: time="2024-12-13T16:14:47.053899780Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 16:14:47.072111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935277664.mount: Deactivated successfully.
Dec 13 16:14:47.080838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77240575.mount: Deactivated successfully.
Dec 13 16:14:47.085842 env[1188]: time="2024-12-13T16:14:47.085764229Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6\""
Dec 13 16:14:47.086879 env[1188]: time="2024-12-13T16:14:47.086842591Z" level=info msg="StartContainer for \"90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6\""
Dec 13 16:14:47.110110 systemd[1]: Started cri-containerd-90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6.scope.
Dec 13 16:14:47.156328 systemd[1]: cri-containerd-90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6.scope: Deactivated successfully.
Dec 13 16:14:47.160364 env[1188]: time="2024-12-13T16:14:47.160281158Z" level=info msg="StartContainer for \"90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6\" returns successfully"
Dec 13 16:14:47.190788 env[1188]: time="2024-12-13T16:14:47.190716222Z" level=info msg="shim disconnected" id=90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6
Dec 13 16:14:47.191141 env[1188]: time="2024-12-13T16:14:47.191099805Z" level=warning msg="cleaning up after shim disconnected" id=90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6 namespace=k8s.io
Dec 13 16:14:47.191294 env[1188]: time="2024-12-13T16:14:47.191264372Z" level=info msg="cleaning up dead shim"
Dec 13 16:14:47.201022 env[1188]: time="2024-12-13T16:14:47.200981916Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3510 runtime=io.containerd.runc.v2\n"
Dec 13 16:14:47.618055 kubelet[1464]: E1213 16:14:47.617968 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:48.059275 env[1188]: time="2024-12-13T16:14:48.058946658Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 16:14:48.078430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356719163.mount: Deactivated successfully.
Dec 13 16:14:48.086111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404876471.mount: Deactivated successfully.
Dec 13 16:14:48.090779 env[1188]: time="2024-12-13T16:14:48.090735050Z" level=info msg="CreateContainer within sandbox \"de83c31a54ee3fe6239ae8f60179f4e49b8c4258e37f1849434e1c1e1613099a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0afa1a120c09fdeadf56a42f8ef26d13d2df21d5c2772c52cf41bce17c9e1014\""
Dec 13 16:14:48.091781 env[1188]: time="2024-12-13T16:14:48.091743960Z" level=info msg="StartContainer for \"0afa1a120c09fdeadf56a42f8ef26d13d2df21d5c2772c52cf41bce17c9e1014\""
Dec 13 16:14:48.117032 systemd[1]: Started cri-containerd-0afa1a120c09fdeadf56a42f8ef26d13d2df21d5c2772c52cf41bce17c9e1014.scope.
Dec 13 16:14:48.171487 env[1188]: time="2024-12-13T16:14:48.171259601Z" level=info msg="StartContainer for \"0afa1a120c09fdeadf56a42f8ef26d13d2df21d5c2772c52cf41bce17c9e1014\" returns successfully"
Dec 13 16:14:48.192587 kubelet[1464]: W1213 16:14:48.192498 1464 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b3fb892_a55f_41c2_8285_891de5bce1b4.slice/cri-containerd-83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b.scope WatchSource:0}: task 83572972bff0d49f31191ff613c4908b24e30707280e9b21c61289301c65354b not found: not found
Dec 13 16:14:48.618450 kubelet[1464]: E1213 16:14:48.618386 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:48.879475 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 16:14:49.083715 kubelet[1464]: I1213 16:14:49.083608 1464 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wzw7q" podStartSLOduration=5.083585778 podStartE2EDuration="5.083585778s" podCreationTimestamp="2024-12-13 16:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:14:49.083414831 +0000 UTC m=+86.279969270" watchObservedRunningTime="2024-12-13 16:14:49.083585778 +0000 UTC m=+86.280140217"
Dec 13 16:14:49.619442 kubelet[1464]: E1213 16:14:49.619374 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:50.619915 kubelet[1464]: E1213 16:14:50.619856 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:51.308201 kubelet[1464]: W1213 16:14:51.308141 1464 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b3fb892_a55f_41c2_8285_891de5bce1b4.slice/cri-containerd-924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd.scope WatchSource:0}: task 924aab6dc282587c32a487e3f2b37908ece2ff9226879eeb4ba2605d14e9cbfd not found: not found
Dec 13 16:14:51.620691 kubelet[1464]: E1213 16:14:51.620566 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:52.297107 systemd-networkd[1016]: lxc_health: Link UP
Dec 13 16:14:52.310006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 16:14:52.312800 systemd-networkd[1016]: lxc_health: Gained carrier
Dec 13 16:14:52.621778 kubelet[1464]: E1213 16:14:52.621704 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:52.716726 systemd[1]: run-containerd-runc-k8s.io-0afa1a120c09fdeadf56a42f8ef26d13d2df21d5c2772c52cf41bce17c9e1014-runc.LmQ0Ru.mount: Deactivated successfully.
Dec 13 16:14:53.622881 kubelet[1464]: E1213 16:14:53.622807 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:54.036762 systemd-networkd[1016]: lxc_health: Gained IPv6LL
Dec 13 16:14:54.421665 kubelet[1464]: W1213 16:14:54.421606 1464 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b3fb892_a55f_41c2_8285_891de5bce1b4.slice/cri-containerd-3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991.scope WatchSource:0}: task 3f51e938c8f07075c5131403d98c3fe5f31ce8d41eb93d251dc36319a206e991 not found: not found
Dec 13 16:14:54.623883 kubelet[1464]: E1213 16:14:54.623796 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:55.625409 kubelet[1464]: E1213 16:14:55.625319 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:56.626792 kubelet[1464]: E1213 16:14:56.626685 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:57.228401 systemd[1]: run-containerd-runc-k8s.io-0afa1a120c09fdeadf56a42f8ef26d13d2df21d5c2772c52cf41bce17c9e1014-runc.mNVfHD.mount: Deactivated successfully.
Dec 13 16:14:57.333540 kubelet[1464]: E1213 16:14:57.333397 1464 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49880->127.0.0.1:41863: write tcp 127.0.0.1:49880->127.0.0.1:41863: write: broken pipe
Dec 13 16:14:57.530360 kubelet[1464]: W1213 16:14:57.529707 1464 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b3fb892_a55f_41c2_8285_891de5bce1b4.slice/cri-containerd-90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6.scope WatchSource:0}: task 90b3834c8b0ad2ca3f9af2ce181ffc614e13410bb0e86a80f14c206f7c940ca6 not found: not found
Dec 13 16:14:57.628580 kubelet[1464]: E1213 16:14:57.628490 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:58.630130 kubelet[1464]: E1213 16:14:58.630053 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:14:59.631645 kubelet[1464]: E1213 16:14:59.631539 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:15:00.632604 kubelet[1464]: E1213 16:15:00.632514 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 16:15:01.633586 kubelet[1464]: E1213 16:15:01.633511 1464 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"