Dec 13 14:27:04.989558 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:27:04.989608 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:27:04.989637 kernel: BIOS-provided physical RAM map: Dec 13 14:27:04.989655 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 14:27:04.989671 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 14:27:04.989688 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 14:27:04.989707 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 13 14:27:04.989725 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 13 14:27:04.989745 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 14:27:04.989762 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 14:27:04.989779 kernel: NX (Execute Disable) protection: active Dec 13 14:27:04.989795 kernel: SMBIOS 2.8 present. Dec 13 14:27:04.989812 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Dec 13 14:27:04.989829 kernel: Hypervisor detected: KVM Dec 13 14:27:04.989850 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:27:04.989873 kernel: kvm-clock: cpu 0, msr 4819a001, primary cpu clock Dec 13 14:27:04.989890 kernel: kvm-clock: using sched offset of 5511567481 cycles Dec 13 14:27:04.989910 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:27:04.989928 kernel: tsc: Detected 1996.249 MHz processor Dec 13 14:27:04.989947 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:27:04.989966 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:27:04.993044 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 13 14:27:04.993061 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:27:04.993083 kernel: ACPI: Early table checksum verification disabled Dec 13 14:27:04.993096 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Dec 13 14:27:04.993111 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:27:04.993125 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:27:04.993138 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:27:04.993152 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 14:27:04.993166 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:27:04.993180 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:27:04.993193 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Dec 13 14:27:04.993210 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Dec 13 14:27:04.993223 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 14:27:04.993237 kernel: ACPI: Reserving APIC table memory at [mem 
0x7ffe17a0-0x7ffe181f] Dec 13 14:27:04.993251 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Dec 13 14:27:04.993265 kernel: No NUMA configuration found Dec 13 14:27:04.993278 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Dec 13 14:27:04.993292 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Dec 13 14:27:04.993306 kernel: Zone ranges: Dec 13 14:27:04.993328 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:27:04.993343 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Dec 13 14:27:04.993357 kernel: Normal empty Dec 13 14:27:04.993371 kernel: Movable zone start for each node Dec 13 14:27:04.993385 kernel: Early memory node ranges Dec 13 14:27:04.993400 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 14:27:04.993416 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 13 14:27:04.993431 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Dec 13 14:27:04.993445 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:27:04.993459 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 14:27:04.993473 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Dec 13 14:27:04.993487 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:27:04.993501 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:27:04.993516 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:27:04.993530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:27:04.993547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:27:04.993562 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:27:04.993576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:27:04.993591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:27:04.993605 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:27:04.993619 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:27:04.993634 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 14:27:04.993648 kernel: Booting paravirtualized kernel on KVM Dec 13 14:27:04.993662 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:27:04.993677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:27:04.993696 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:27:04.993711 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:27:04.993725 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:27:04.993739 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Dec 13 14:27:04.993753 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 14:27:04.993768 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515805 Dec 13 14:27:04.993782 kernel: Policy zone: DMA32 Dec 13 14:27:04.993799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:27:04.993818 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:27:04.993833 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:27:04.993847 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:27:04.993861 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:27:04.993876 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123076K reserved, 0K cma-reserved) Dec 13 14:27:04.993891 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:27:04.993905 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:27:04.993919 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:27:04.993937 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:27:04.993952 kernel: rcu: RCU event tracing is enabled. Dec 13 14:27:04.993967 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:27:04.994005 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:27:04.994020 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:27:04.994035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:27:04.994050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:27:04.994064 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 14:27:04.994078 kernel: Console: colour VGA+ 80x25 Dec 13 14:27:04.994096 kernel: printk: console [tty0] enabled Dec 13 14:27:04.994110 kernel: printk: console [ttyS0] enabled Dec 13 14:27:04.994125 kernel: ACPI: Core revision 20210730 Dec 13 14:27:04.994139 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:27:04.994154 kernel: x2apic enabled Dec 13 14:27:04.994168 kernel: Switched APIC routing to physical x2apic. Dec 13 14:27:04.994183 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:27:04.994198 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:27:04.994212 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Dec 13 14:27:04.994227 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 14:27:04.994244 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 14:27:04.994259 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:27:04.994274 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:27:04.994289 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:27:04.994303 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:27:04.994318 kernel: Speculative Store Bypass: Vulnerable Dec 13 14:27:04.994332 kernel: x86/fpu: x87 FPU will use FXSAVE Dec 13 14:27:04.994347 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:27:04.994362 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:27:04.994378 kernel: LSM: Security Framework initializing Dec 13 14:27:04.994393 kernel: SELinux: Initializing. Dec 13 14:27:04.994407 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 14:27:04.994422 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 14:27:04.994437 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Dec 13 14:27:04.994452 kernel: Performance Events: AMD PMU driver. Dec 13 14:27:04.994466 kernel: ... version: 0 Dec 13 14:27:04.994480 kernel: ... bit width: 48 Dec 13 14:27:04.994495 kernel: ... generic registers: 4 Dec 13 14:27:04.994521 kernel: ... value mask: 0000ffffffffffff Dec 13 14:27:04.994536 kernel: ... max period: 00007fffffffffff Dec 13 14:27:04.994553 kernel: ... fixed-purpose events: 0 Dec 13 14:27:04.994568 kernel: ... event mask: 000000000000000f Dec 13 14:27:04.994583 kernel: signal: max sigframe size: 1440 Dec 13 14:27:04.994598 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:27:04.994613 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:27:04.994628 kernel: x86: Booting SMP configuration: Dec 13 14:27:04.994646 kernel: .... 
node #0, CPUs: #1 Dec 13 14:27:04.994661 kernel: kvm-clock: cpu 1, msr 4819a041, secondary cpu clock Dec 13 14:27:04.994676 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Dec 13 14:27:04.994692 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:27:04.994707 kernel: smpboot: Max logical packages: 2 Dec 13 14:27:04.994723 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Dec 13 14:27:04.994738 kernel: devtmpfs: initialized Dec 13 14:27:04.994753 kernel: x86/mm: Memory block size: 128MB Dec 13 14:27:04.994768 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:27:04.994786 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:27:04.994801 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:27:04.994816 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:27:04.994831 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:27:04.994847 kernel: audit: type=2000 audit(1734100024.492:1): state=initialized audit_enabled=0 res=1 Dec 13 14:27:04.994862 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:27:04.994876 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:27:04.994891 kernel: cpuidle: using governor menu Dec 13 14:27:04.994907 kernel: ACPI: bus type PCI registered Dec 13 14:27:04.994924 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:27:04.994939 kernel: dca service started, version 1.12.1 Dec 13 14:27:04.994954 kernel: PCI: Using configuration type 1 for base access Dec 13 14:27:04.994970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 14:27:04.995003 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:27:04.995018 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:27:04.995033 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:27:04.995048 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:27:04.995063 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:27:04.995081 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:27:04.995096 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:27:04.995110 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:27:04.995125 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:27:04.995140 kernel: ACPI: Interpreter enabled Dec 13 14:27:04.995155 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:27:04.995170 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:27:04.995185 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:27:04.995200 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 14:27:04.995218 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:27:04.995455 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:27:04.995612 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 14:27:04.995635 kernel: acpiphp: Slot [3] registered Dec 13 14:27:04.995650 kernel: acpiphp: Slot [4] registered Dec 13 14:27:04.995665 kernel: acpiphp: Slot [5] registered Dec 13 14:27:04.995680 kernel: acpiphp: Slot [6] registered Dec 13 14:27:04.995700 kernel: acpiphp: Slot [7] registered Dec 13 14:27:04.995715 kernel: acpiphp: Slot [8] registered Dec 13 14:27:04.995730 kernel: acpiphp: Slot [9] registered Dec 13 14:27:04.995745 kernel: acpiphp: Slot [10] registered Dec 13 14:27:04.995760 kernel: acpiphp: Slot [11] registered Dec 13 14:27:04.995774 kernel: acpiphp: Slot [12] registered Dec 13 14:27:04.995789 kernel: acpiphp: Slot [13] registered Dec 13 14:27:04.995804 kernel: acpiphp: Slot [14] registered Dec 13 14:27:04.995818 kernel: acpiphp: Slot [15] registered Dec 13 14:27:04.995833 kernel: acpiphp: Slot [16] registered Dec 13 14:27:04.995850 kernel: acpiphp: Slot [17] registered Dec 13 14:27:04.995865 kernel: acpiphp: Slot [18] registered Dec 13 14:27:04.995879 kernel: acpiphp: Slot [19] registered Dec 13 14:27:04.995894 kernel: acpiphp: Slot [20] registered Dec 13 14:27:04.995909 kernel: acpiphp: Slot [21] registered Dec 13 14:27:04.995923 kernel: acpiphp: Slot [22] registered Dec 13 14:27:04.995938 kernel: acpiphp: Slot [23] registered Dec 13 14:27:04.995953 kernel: acpiphp: Slot [24] registered Dec 13 14:27:04.995968 kernel: acpiphp: Slot [25] registered Dec 13 14:27:04.996011 kernel: acpiphp: Slot [26] registered Dec 13 14:27:04.996026 kernel: acpiphp: Slot [27] registered Dec 13 14:27:04.996041 kernel: acpiphp: Slot [28] registered Dec 13 14:27:04.996056 kernel: acpiphp: Slot [29] registered Dec 13 14:27:04.996071 kernel: acpiphp: Slot [30] registered Dec 13 14:27:04.996085 kernel: acpiphp: Slot [31] registered Dec 13 14:27:04.996100 kernel: PCI host bridge to bus 0000:00 Dec 13 14:27:04.996278 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:27:04.996418 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:27:04.996560 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:27:04.996687 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 14:27:04.996767 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 14:27:04.996837 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:27:04.996934 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 14:27:05.000070 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 14:27:05.000189 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 14:27:05.000276 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Dec 13 14:27:05.000359 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 14:27:05.000440 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 14:27:05.000524 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 14:27:05.000607 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 14:27:05.000699 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 14:27:05.000788 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 14:27:05.000873 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 14:27:05.000965 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 14:27:05.001074 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 
14:27:05.001162 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 14:27:05.001245 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Dec 13 14:27:05.001333 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Dec 13 14:27:05.001420 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:27:05.001518 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:27:05.001605 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Dec 13 14:27:05.001689 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Dec 13 14:27:05.001772 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 14:27:05.001855 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Dec 13 14:27:05.001949 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:27:05.003077 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 14:27:05.003163 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Dec 13 14:27:05.003245 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 14:27:05.003333 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 14:27:05.003415 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Dec 13 14:27:05.003496 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 14:27:05.003589 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:27:05.003670 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Dec 13 14:27:05.003749 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 14:27:05.003761 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:27:05.003770 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:27:05.003778 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:27:05.003786 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:27:05.003794 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 14:27:05.003806 kernel: iommu: Default domain type: Translated Dec 13 14:27:05.003814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:27:05.003893 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 14:27:05.005106 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:27:05.005211 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 14:27:05.005224 kernel: vgaarb: loaded Dec 13 14:27:05.005233 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:27:05.005241 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:27:05.005250 kernel: PTP clock support registered Dec 13 14:27:05.005263 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:27:05.005271 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:27:05.005279 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 14:27:05.005287 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 13 14:27:05.005295 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:27:05.005303 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:27:05.005311 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:27:05.005319 kernel: pnp: PnP ACPI init Dec 13 14:27:05.005416 kernel: pnp 00:03: [dma 2] Dec 13 14:27:05.005433 kernel: pnp: PnP ACPI: found 5 devices Dec 13 14:27:05.005441 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:27:05.005450 kernel: NET: Registered PF_INET protocol family Dec 13 14:27:05.005458 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:27:05.005466 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 14:27:05.005475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:27:05.005483 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:27:05.005491 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 14:27:05.005502 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 14:27:05.005510 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 14:27:05.005518 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 14:27:05.005527 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:27:05.005535 kernel: NET: Registered PF_XDP protocol family Dec 13 14:27:05.005608 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:27:05.005682 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:27:05.005753 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:27:05.005822 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 14:27:05.005897 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 14:27:05.007032 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 14:27:05.007126 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 14:27:05.007233 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Dec 13 14:27:05.007247 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:27:05.007256 kernel: Initialise system trusted keyrings Dec 13 14:27:05.007264 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 14:27:05.007277 kernel: Key type asymmetric registered Dec 13 14:27:05.007285 kernel: Asymmetric key parser 'x509' registered Dec 13 14:27:05.007293 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:27:05.007301 kernel: io scheduler mq-deadline registered Dec 13 14:27:05.007309 kernel: io scheduler kyber registered Dec 13 14:27:05.007317 kernel: io scheduler bfq registered Dec 13 14:27:05.007325 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:27:05.007334 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 14:27:05.007342 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 14:27:05.007350 kernel: ACPI: \_SB_.LNKD: Enabled at 
IRQ 11 Dec 13 14:27:05.007360 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 14:27:05.007368 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:27:05.007377 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:27:05.007385 kernel: random: crng init done Dec 13 14:27:05.007393 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:27:05.007401 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:27:05.007409 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:27:05.007496 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:27:05.007512 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:27:05.007583 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:27:05.007655 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:27:04 UTC (1734100024) Dec 13 14:27:05.007727 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 14:27:05.007738 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:27:05.007747 kernel: Segment Routing with IPv6 Dec 13 14:27:05.007755 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:27:05.007763 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:27:05.007771 kernel: Key type dns_resolver registered Dec 13 14:27:05.007781 kernel: IPI shorthand broadcast: enabled Dec 13 14:27:05.007790 kernel: sched_clock: Marking stable (733673804, 120185656)->(872245687, -18386227) Dec 13 14:27:05.007798 kernel: registered taskstats version 1 Dec 13 14:27:05.007806 kernel: Loading compiled-in X.509 certificates Dec 13 14:27:05.007814 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:27:05.007822 kernel: Key type .fscrypt registered Dec 13 14:27:05.007830 kernel: Key type fscrypt-provisioning registered Dec 13 14:27:05.007838 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:27:05.007848 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:27:05.007856 kernel: ima: No architecture policies found Dec 13 14:27:05.007864 kernel: clk: Disabling unused clocks Dec 13 14:27:05.007872 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:27:05.007880 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:27:05.007888 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:27:05.007897 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:27:05.007905 kernel: Run /init as init process Dec 13 14:27:05.007913 kernel: with arguments: Dec 13 14:27:05.007922 kernel: /init Dec 13 14:27:05.007930 kernel: with environment: Dec 13 14:27:05.007938 kernel: HOME=/ Dec 13 14:27:05.007946 kernel: TERM=linux Dec 13 14:27:05.007954 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:27:05.007965 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:27:05.007992 systemd[1]: Detected virtualization kvm. Dec 13 14:27:05.008001 systemd[1]: Detected architecture x86-64. Dec 13 14:27:05.008012 systemd[1]: Running in initrd. Dec 13 14:27:05.008021 systemd[1]: No hostname configured, using default hostname. 
Dec 13 14:27:05.008029 systemd[1]: Hostname set to . Dec 13 14:27:05.008038 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:27:05.008047 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:27:05.008055 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:27:05.008063 systemd[1]: Reached target cryptsetup.target. Dec 13 14:27:05.008072 systemd[1]: Reached target paths.target. Dec 13 14:27:05.008083 systemd[1]: Reached target slices.target. Dec 13 14:27:05.008092 systemd[1]: Reached target swap.target. Dec 13 14:27:05.008100 systemd[1]: Reached target timers.target. Dec 13 14:27:05.008109 systemd[1]: Listening on iscsid.socket. Dec 13 14:27:05.008118 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:27:05.008126 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:27:05.008135 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:27:05.008145 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:27:05.008154 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:27:05.008163 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:27:05.008180 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:27:05.008189 systemd[1]: Reached target sockets.target. Dec 13 14:27:05.008206 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:27:05.008217 systemd[1]: Finished network-cleanup.service. Dec 13 14:27:05.008227 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:27:05.008236 systemd[1]: Starting systemd-journald.service... Dec 13 14:27:05.008245 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:27:05.008254 systemd[1]: Starting systemd-resolved.service... Dec 13 14:27:05.008263 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:27:05.008272 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:27:05.008280 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:27:05.008293 systemd-journald[184]: Journal started Dec 13 14:27:05.008338 systemd-journald[184]: Runtime Journal (/run/log/journal/b320d7868a524ae1819063eebbed93e6) is 4.9M, max 39.5M, 34.5M free. Dec 13 14:27:04.974057 systemd-modules-load[185]: Inserted module 'overlay' Dec 13 14:27:05.032182 systemd[1]: Started systemd-journald.service. Dec 13 14:27:05.032217 kernel: audit: type=1130 audit(1734100025.027:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.019387 systemd-resolved[186]: Positive Trust Anchors: Dec 13 14:27:05.036716 kernel: audit: type=1130 audit(1734100025.031:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.019406 systemd-resolved[186]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:27:05.043621 kernel: audit: type=1130 audit(1734100025.037:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.043642 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:27:05.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.019447 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:27:05.049718 kernel: audit: type=1130 audit(1734100025.043:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.029600 systemd-resolved[186]: Defaulting to hostname 'linux'. Dec 13 14:27:05.032749 systemd[1]: Started systemd-resolved.service. Dec 13 14:27:05.037366 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:27:05.064751 kernel: Bridge firewalling registered Dec 13 14:27:05.044276 systemd[1]: Reached target nss-lookup.target. Dec 13 14:27:05.051025 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:27:05.052119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:27:05.074084 kernel: audit: type=1130 audit(1734100025.066:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.055192 systemd-modules-load[185]: Inserted module 'br_netfilter' Dec 13 14:27:05.066072 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:27:05.075362 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:27:05.077115 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:27:05.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.082033 kernel: audit: type=1130 audit(1734100025.075:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:05.088990 kernel: SCSI subsystem initialized Dec 13 14:27:05.089446 dracut-cmdline[201]: dracut-dracut-053 Dec 13 14:27:05.091128 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:27:05.104987 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:27:05.107389 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:27:05.107410 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:27:05.110737 systemd-modules-load[185]: Inserted module 'dm_multipath' Dec 13 14:27:05.111856 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:27:05.117012 kernel: audit: type=1130 audit(1734100025.112:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.117670 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:27:05.128447 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:27:05.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.134003 kernel: audit: type=1130 audit(1734100025.128:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.162246 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:27:05.183229 kernel: iscsi: registered transport (tcp) Dec 13 14:27:05.211043 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:27:05.211109 kernel: QLogic iSCSI HBA Driver Dec 13 14:27:05.266492 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:27:05.273127 kernel: audit: type=1130 audit(1734100025.266:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.268232 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:27:05.354038 kernel: raid6: sse2x4 gen() 9174 MB/s Dec 13 14:27:05.371071 kernel: raid6: sse2x4 xor() 4375 MB/s Dec 13 14:27:05.388063 kernel: raid6: sse2x2 gen() 13609 MB/s Dec 13 14:27:05.405059 kernel: raid6: sse2x2 xor() 8630 MB/s Dec 13 14:27:05.422249 kernel: raid6: sse2x1 gen() 10876 MB/s Dec 13 14:27:05.439809 kernel: raid6: sse2x1 xor() 6892 MB/s Dec 13 14:27:05.439880 kernel: raid6: using algorithm sse2x2 gen() 13609 MB/s Dec 13 14:27:05.439910 kernel: raid6: .... 
xor() 8630 MB/s, rmw enabled Dec 13 14:27:05.440667 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 14:27:05.456309 kernel: xor: measuring software checksum speed Dec 13 14:27:05.456376 kernel: prefetch64-sse : 18325 MB/sec Dec 13 14:27:05.457365 kernel: generic_sse : 15609 MB/sec Dec 13 14:27:05.457403 kernel: xor: using function: prefetch64-sse (18325 MB/sec) Dec 13 14:27:05.572038 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:27:05.586520 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:27:05.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.595000 audit: BPF prog-id=7 op=LOAD Dec 13 14:27:05.595000 audit: BPF prog-id=8 op=LOAD Dec 13 14:27:05.597755 systemd[1]: Starting systemd-udevd.service... Dec 13 14:27:05.611549 systemd-udevd[385]: Using default interface naming scheme 'v252'. Dec 13 14:27:05.616245 systemd[1]: Started systemd-udevd.service. Dec 13 14:27:05.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.624011 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:27:05.644388 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Dec 13 14:27:05.696694 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:27:05.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.699838 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:27:05.758162 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:27:05.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:05.829006 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 14:27:05.854682 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:27:05.854707 kernel: GPT:17805311 != 41943039 Dec 13 14:27:05.854720 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:27:05.854731 kernel: GPT:17805311 != 41943039 Dec 13 14:27:05.854742 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:27:05.854753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:27:05.859001 kernel: libata version 3.00 loaded. Dec 13 14:27:05.869001 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 14:27:05.885565 kernel: scsi host0: ata_piix Dec 13 14:27:05.885710 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441) Dec 13 14:27:05.885724 kernel: scsi host1: ata_piix Dec 13 14:27:05.885834 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 14:27:05.885848 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 14:27:05.898458 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:27:05.934295 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:27:05.937576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Dec 13 14:27:05.938113 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:27:05.943047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:27:05.944494 systemd[1]: Starting disk-uuid.service... Dec 13 14:27:05.955378 disk-uuid[461]: Primary Header is updated. Dec 13 14:27:05.955378 disk-uuid[461]: Secondary Entries is updated. Dec 13 14:27:05.955378 disk-uuid[461]: Secondary Header is updated. Dec 13 14:27:05.962998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:27:05.967995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:27:07.223075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:27:07.223357 disk-uuid[462]: The operation has completed successfully. Dec 13 14:27:07.308688 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:27:07.308803 systemd[1]: Finished disk-uuid.service. Dec 13 14:27:07.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.310342 systemd[1]: Starting verity-setup.service... Dec 13 14:27:07.331038 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 14:27:07.435123 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:27:07.438437 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:27:07.443676 systemd[1]: Finished verity-setup.service. Dec 13 14:27:07.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.582038 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:27:07.582244 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:27:07.582881 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:27:07.583726 systemd[1]: Starting ignition-setup.service... Dec 13 14:27:07.584990 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:27:07.607428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:27:07.607504 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:27:07.607516 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:27:07.636337 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:27:07.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.661421 systemd[1]: Finished ignition-setup.service. Dec 13 14:27:07.662800 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:27:07.726787 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:27:07.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.729000 audit: BPF prog-id=9 op=LOAD Dec 13 14:27:07.731894 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:27:07.759441 systemd-networkd[633]: lo: Link UP Dec 13 14:27:07.760227 systemd-networkd[633]: lo: Gained carrier Dec 13 14:27:07.762339 systemd-networkd[633]: Enumeration completed Dec 13 14:27:07.762962 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:27:07.764943 systemd[1]: Started systemd-networkd.service. Dec 13 14:27:07.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.766487 systemd[1]: Reached target network.target. Dec 13 14:27:07.767407 systemd-networkd[633]: eth0: Link UP Dec 13 14:27:07.767416 systemd-networkd[633]: eth0: Gained carrier Dec 13 14:27:07.774126 systemd[1]: Starting iscsiuio.service... Dec 13 14:27:07.787054 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.127/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 14:27:07.792447 systemd[1]: Started iscsiuio.service. Dec 13 14:27:07.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.793875 systemd[1]: Starting iscsid.service... Dec 13 14:27:07.799376 iscsid[639]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:27:07.799376 iscsid[639]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:27:07.799376 iscsid[639]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:27:07.799376 iscsid[639]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:27:07.799376 iscsid[639]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:27:07.799376 iscsid[639]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:27:07.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.799215 systemd[1]: Started iscsid.service. Dec 13 14:27:07.800851 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:27:07.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.814969 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:27:07.815492 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:27:07.815903 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:27:07.816404 systemd[1]: Reached target remote-fs.target. Dec 13 14:27:07.818505 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:27:07.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:07.829473 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 14:27:08.067187 ignition[581]: Ignition 2.14.0 Dec 13 14:27:08.068528 ignition[581]: Stage: fetch-offline Dec 13 14:27:08.068723 ignition[581]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:08.068820 ignition[581]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:27:08.072283 ignition[581]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:27:08.072608 ignition[581]: parsed url from cmdline: "" Dec 13 14:27:08.072619 ignition[581]: no config URL provided Dec 13 14:27:08.072633 ignition[581]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:27:08.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.075432 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:27:08.072653 ignition[581]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:27:08.078328 systemd[1]: Starting ignition-fetch.service... Dec 13 14:27:08.072667 ignition[581]: failed to fetch config: resource requires networking Dec 13 14:27:08.072924 ignition[581]: Ignition finished successfully Dec 13 14:27:08.096414 ignition[656]: Ignition 2.14.0 Dec 13 14:27:08.096439 ignition[656]: Stage: fetch Dec 13 14:27:08.096686 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:08.096728 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:27:08.098872 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:27:08.099108 ignition[656]: parsed url from cmdline: "" Dec 13 14:27:08.099117 ignition[656]: no config URL provided Dec 13 14:27:08.099129 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:27:08.099147 ignition[656]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:27:08.105971 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 14:27:08.106101 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 14:27:08.106519 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 14:27:08.328143 ignition[656]: GET result: OK Dec 13 14:27:08.328277 ignition[656]: parsing config with SHA512: 0636ba5b1bbbf91b2b40ceb9c76fd08080a6462fe598e0646a1ee9678e1b6929cf094a4f874e9544af2ca18021bdcaf6e590051d7bb3269ff1f5fe5c6099ef90 Dec 13 14:27:08.340531 unknown[656]: fetched base config from "system" Dec 13 14:27:08.341366 unknown[656]: fetched base config from "system" Dec 13 14:27:08.341397 unknown[656]: fetched user config from "openstack" Dec 13 14:27:08.342088 ignition[656]: fetch: fetch complete Dec 13 14:27:08.342101 ignition[656]: fetch: fetch passed Dec 13 14:27:08.342207 ignition[656]: Ignition finished successfully Dec 13 14:27:08.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.344641 systemd[1]: Finished ignition-fetch.service. Dec 13 14:27:08.346327 systemd[1]: Starting ignition-kargs.service... 
Dec 13 14:27:08.365962 ignition[662]: Ignition 2.14.0 Dec 13 14:27:08.366062 ignition[662]: Stage: kargs Dec 13 14:27:08.366313 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:08.366356 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:27:08.372091 systemd[1]: Finished ignition-kargs.service. Dec 13 14:27:08.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.368504 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:27:08.373574 systemd[1]: Starting ignition-disks.service... Dec 13 14:27:08.370443 ignition[662]: kargs: kargs passed Dec 13 14:27:08.370541 ignition[662]: Ignition finished successfully Dec 13 14:27:08.383454 ignition[667]: Ignition 2.14.0 Dec 13 14:27:08.383462 ignition[667]: Stage: disks Dec 13 14:27:08.383572 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:08.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.387286 systemd[1]: Finished ignition-disks.service. Dec 13 14:27:08.383590 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:27:08.388634 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:27:08.385279 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:27:08.390581 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:27:08.386368 ignition[667]: disks: disks passed Dec 13 14:27:08.392146 systemd[1]: Reached target local-fs.target. Dec 13 14:27:08.386438 ignition[667]: Ignition finished successfully Dec 13 14:27:08.393741 systemd[1]: Reached target sysinit.target. Dec 13 14:27:08.395728 systemd[1]: Reached target basic.target. Dec 13 14:27:08.398199 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:27:08.423149 systemd-fsck[674]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:27:08.435066 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:27:08.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.440523 systemd[1]: Mounting sysroot.mount... Dec 13 14:27:08.469034 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:27:08.471107 systemd[1]: Mounted sysroot.mount. Dec 13 14:27:08.472744 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:27:08.478029 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:27:08.480599 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:27:08.482484 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 14:27:08.484226 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:27:08.484332 systemd[1]: Reached target ignition-diskful.target. 
Dec 13 14:27:08.488263 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:27:08.495313 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:27:08.496746 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:27:08.504521 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:27:08.512746 initrd-setup-root[694]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:27:08.523030 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681) Dec 13 14:27:08.530894 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:27:08.530953 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:27:08.531011 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:27:08.533257 initrd-setup-root[702]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:27:08.552458 initrd-setup-root[726]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:27:08.570497 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:27:08.646917 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:27:08.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.649287 systemd[1]: Starting ignition-mount.service... Dec 13 14:27:08.657192 systemd[1]: Starting sysroot-boot.service... Dec 13 14:27:08.668822 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:27:08.669096 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:27:08.704620 ignition[749]: INFO : Ignition 2.14.0 Dec 13 14:27:08.704620 ignition[749]: INFO : Stage: mount Dec 13 14:27:08.704620 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:08.704620 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:27:08.704620 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:27:08.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.709605 coreos-metadata[680]: Dec 13 14:27:08.705 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 14:27:08.708256 systemd[1]: Finished ignition-mount.service. Dec 13 14:27:08.711162 ignition[749]: INFO : mount: mount passed Dec 13 14:27:08.711162 ignition[749]: INFO : Ignition finished successfully Dec 13 14:27:08.719167 systemd[1]: Finished sysroot-boot.service. Dec 13 14:27:08.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.725265 coreos-metadata[680]: Dec 13 14:27:08.725 INFO Fetch successful Dec 13 14:27:08.727003 coreos-metadata[680]: Dec 13 14:27:08.726 INFO wrote hostname ci-3510-3-6-0-e70ea02b81.novalocal to /sysroot/etc/hostname Dec 13 14:27:08.729949 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 14:27:08.730813 systemd[1]: Finished flatcar-openstack-hostname.service. 
Dec 13 14:27:08.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:08.733087 systemd[1]: Starting ignition-files.service... Dec 13 14:27:08.747242 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:27:08.762042 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (757) Dec 13 14:27:08.767040 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:27:08.767101 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:27:08.767141 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:27:08.782726 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:27:08.802569 ignition[776]: INFO : Ignition 2.14.0 Dec 13 14:27:08.802569 ignition[776]: INFO : Stage: files Dec 13 14:27:08.805376 ignition[776]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:08.805376 ignition[776]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:27:08.805376 ignition[776]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:27:08.812437 ignition[776]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:27:08.812437 ignition[776]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:27:08.812437 ignition[776]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:27:08.818635 ignition[776]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:27:08.818635 ignition[776]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:27:08.818635 ignition[776]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:27:08.817968 unknown[776]: wrote ssh authorized keys file for user: core Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:27:08.825720 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 14:27:08.840343 systemd-networkd[633]: eth0: Gained IPv6LL Dec 13 14:27:09.231707 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 14:27:10.855446 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:27:10.857498 ignition[776]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:27:10.858301 ignition[776]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:27:10.859197 ignition[776]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:27:10.862782 ignition[776]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:27:10.874255 ignition[776]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:27:10.875315 ignition[776]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:27:10.875315 ignition[776]: INFO : files: files passed Dec 13 14:27:10.875315 ignition[776]: INFO : Ignition finished successfully Dec 13 14:27:10.888366 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 14:27:10.888389 kernel: audit: type=1130 audit(1734100030.882:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.879773 systemd[1]: Finished ignition-files.service. Dec 13 14:27:10.887862 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:27:10.889681 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:27:10.891178 systemd[1]: Starting ignition-quench.service... Dec 13 14:27:10.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.896284 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:27:10.911308 kernel: audit: type=1130 audit(1734100030.895:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.911362 kernel: audit: type=1131 audit(1734100030.895:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:10.896391 systemd[1]: Finished ignition-quench.service. Dec 13 14:27:10.915028 initrd-setup-root-after-ignition[801]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:27:10.917777 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:27:10.927963 kernel: audit: type=1130 audit(1734100030.917:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.918408 systemd[1]: Reached target ignition-complete.target. Dec 13 14:27:10.929193 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:27:10.953139 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:27:10.953361 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:27:10.961931 kernel: audit: type=1130 audit(1734100030.953:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.961956 kernel: audit: type=1131 audit(1734100030.953:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.955197 systemd[1]: Reached target initrd-fs.target. Dec 13 14:27:10.963037 systemd[1]: Reached target initrd.target. Dec 13 14:27:10.964590 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:27:10.966380 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:27:10.982249 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:27:10.991791 kernel: audit: type=1130 audit(1734100030.981:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:10.983614 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:27:10.997962 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:27:10.998623 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:27:11.000495 systemd[1]: Stopped target timers.target. Dec 13 14:27:11.002123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:27:11.012744 kernel: audit: type=1131 audit(1734100031.002:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:11.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.002270 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:27:11.004077 systemd[1]: Stopped target initrd.target. Dec 13 14:27:11.013293 systemd[1]: Stopped target basic.target. Dec 13 14:27:11.014155 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:27:11.015053 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:27:11.016013 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:27:11.017036 systemd[1]: Stopped target remote-fs.target. Dec 13 14:27:11.017957 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:27:11.018950 systemd[1]: Stopped target sysinit.target. Dec 13 14:27:11.019949 systemd[1]: Stopped target local-fs.target. Dec 13 14:27:11.021009 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:27:11.021998 systemd[1]: Stopped target swap.target. Dec 13 14:27:11.027514 kernel: audit: type=1131 audit(1734100031.022:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.022853 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:27:11.023027 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:27:11.032958 kernel: audit: type=1131 audit(1734100031.028:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.023934 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:27:11.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.028025 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:27:11.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.028180 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:27:11.029231 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:27:11.029387 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:27:11.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.033655 systemd[1]: ignition-files.service: Deactivated successfully. 
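
For reference, the Ignition files stage recorded further above downloads the kubernetes sysext image from the sysext-bakery release URL, writes it under /sysroot/opt/extensions/, and links /sysroot/etc/extensions/kubernetes.raw to it. A minimal Go sketch of those two operations, reusing the URL and paths from the log (illustrative only; Ignition performs the equivalent steps internally):

package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// URL and destination path as shown in the files-stage op(6) entries.
	const url = "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"
	const dst = "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"

	if err := os.MkdirAll("/sysroot/opt/extensions/kubernetes", 0755); err != nil {
		log.Fatal(err)
	}

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Mirrors the "GET result: OK" check in the log.
		log.Fatalf("GET result: %s", resp.Status)
	}

	out, err := os.Create(dst)
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}

	// Matches the op(5) "writing link" entries:
	// /sysroot/etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
	if err := os.Symlink("/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
		"/sysroot/etc/extensions/kubernetes.raw"); err != nil {
		log.Fatal(err)
	}
}
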
Dec 13 14:27:11.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.033806 systemd[1]: Stopped ignition-files.service. Dec 13 14:27:11.035504 systemd[1]: Stopping ignition-mount.service... Dec 13 14:27:11.041559 systemd[1]: Stopping iscsiuio.service... Dec 13 14:27:11.042051 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:27:11.042279 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:27:11.044012 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:27:11.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.056771 ignition[814]: INFO : Ignition 2.14.0 Dec 13 14:27:11.056771 ignition[814]: INFO : Stage: umount Dec 13 14:27:11.056771 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:11.056771 ignition[814]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:27:11.056771 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:27:11.056771 ignition[814]: INFO : umount: umount passed Dec 13 14:27:11.056771 ignition[814]: INFO : Ignition finished successfully Dec 13 14:27:11.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.044538 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:27:11.044778 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:27:11.045560 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:27:11.045758 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:27:11.048806 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:27:11.048935 systemd[1]: Stopped iscsiuio.service. Dec 13 14:27:11.055989 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:27:11.056105 systemd[1]: Stopped ignition-mount.service. Dec 13 14:27:11.057488 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:27:11.057601 systemd[1]: Stopped ignition-disks.service. Dec 13 14:27:11.058944 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:27:11.059034 systemd[1]: Stopped ignition-kargs.service. 
Dec 13 14:27:11.060035 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:27:11.060072 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:27:11.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.061532 systemd[1]: Stopped target network.target. Dec 13 14:27:11.062717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:27:11.062758 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:27:11.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.063707 systemd[1]: Stopped target paths.target. Dec 13 14:27:11.064731 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:27:11.069012 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:27:11.069631 systemd[1]: Stopped target slices.target. Dec 13 14:27:11.070535 systemd[1]: Stopped target sockets.target. Dec 13 14:27:11.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.071407 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:27:11.071435 systemd[1]: Closed iscsid.socket. Dec 13 14:27:11.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.072356 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:27:11.072391 systemd[1]: Closed iscsiuio.socket. Dec 13 14:27:11.073264 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:27:11.088000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:27:11.073303 systemd[1]: Stopped ignition-setup.service. Dec 13 14:27:11.074630 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:27:11.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.075260 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:27:11.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.076403 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:27:11.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.076489 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:27:11.079022 systemd-networkd[633]: eth0: DHCPv6 lease lost Dec 13 14:27:11.098000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:27:11.083697 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:27:11.083794 systemd[1]: Stopped systemd-resolved.service. 
Dec 13 14:27:11.085607 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:27:11.085696 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:27:11.087581 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:27:11.087618 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:27:11.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.089332 systemd[1]: Stopping network-cleanup.service... Dec 13 14:27:11.089899 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:27:11.089952 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:27:11.092297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:27:11.092336 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:27:11.093454 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:27:11.093491 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:27:11.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.098119 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:27:11.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.099859 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:27:11.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.102173 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:27:11.102328 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:27:11.104093 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:27:11.104134 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:27:11.107664 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:27:11.107709 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:27:11.108481 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:27:11.108523 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:27:11.109544 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:27:11.109581 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:27:11.110403 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:27:11.110439 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:27:11.111909 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:27:11.121459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:27:11.121535 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:27:11.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.123096 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:27:11.123199 systemd[1]: Stopped network-cleanup.service. 
Dec 13 14:27:11.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.124439 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:27:11.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.124529 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:27:11.163933 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:27:11.656086 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:27:11.656334 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:27:11.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.658928 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:27:11.660938 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:27:11.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:11.661104 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:27:11.664776 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:27:11.709099 systemd[1]: Switching root. Dec 13 14:27:11.749892 iscsid[639]: iscsid shutting down. Dec 13 14:27:11.751219 systemd-journald[184]: Received SIGTERM from PID 1 (n/a). Dec 13 14:27:11.751315 systemd-journald[184]: Journal stopped Dec 13 14:27:18.149877 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:27:18.149939 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:27:18.149954 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:27:18.149969 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:27:18.150050 kernel: SELinux: policy capability open_perms=1 Dec 13 14:27:18.150063 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:27:18.150075 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:27:18.150086 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:27:18.150101 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:27:18.150134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:27:18.150146 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:27:18.150159 systemd[1]: Successfully loaded SELinux policy in 88.981ms. Dec 13 14:27:18.150179 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.011ms. 
Dec 13 14:27:18.150194 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:27:18.150208 systemd[1]: Detected virtualization kvm. Dec 13 14:27:18.150220 systemd[1]: Detected architecture x86-64. Dec 13 14:27:18.150232 systemd[1]: Detected first boot. Dec 13 14:27:18.150249 systemd[1]: Hostname set to . Dec 13 14:27:18.150262 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:27:18.150274 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:27:18.150286 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:27:18.150298 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:27:18.150311 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:18.150325 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:27:18.150340 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 13 14:27:18.150352 kernel: audit: type=1334 audit(1734100037.901:88): prog-id=12 op=LOAD Dec 13 14:27:18.150369 kernel: audit: type=1334 audit(1734100037.901:89): prog-id=3 op=UNLOAD Dec 13 14:27:18.150380 kernel: audit: type=1334 audit(1734100037.901:90): prog-id=13 op=LOAD Dec 13 14:27:18.150391 kernel: audit: type=1334 audit(1734100037.904:91): prog-id=14 op=LOAD Dec 13 14:27:18.150403 kernel: audit: type=1334 audit(1734100037.904:92): prog-id=4 op=UNLOAD Dec 13 14:27:18.150414 kernel: audit: type=1334 audit(1734100037.904:93): prog-id=5 op=UNLOAD Dec 13 14:27:18.150428 kernel: audit: type=1334 audit(1734100037.907:94): prog-id=15 op=LOAD Dec 13 14:27:18.150439 kernel: audit: type=1334 audit(1734100037.907:95): prog-id=12 op=UNLOAD Dec 13 14:27:18.150452 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:27:18.150463 kernel: audit: type=1334 audit(1734100037.910:96): prog-id=16 op=LOAD Dec 13 14:27:18.150475 kernel: audit: type=1334 audit(1734100037.913:97): prog-id=17 op=LOAD Dec 13 14:27:18.150486 systemd[1]: Stopped iscsid.service. Dec 13 14:27:18.150499 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:27:18.150512 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:27:18.150523 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:27:18.150538 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:27:18.150549 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:27:18.150562 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:27:18.150573 systemd[1]: Created slice system-getty.slice. Dec 13 14:27:18.150586 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:27:18.150598 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:27:18.150609 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:27:18.150639 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Dec 13 14:27:18.150652 systemd[1]: Created slice user.slice. Dec 13 14:27:18.150663 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:27:18.150675 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:27:18.150687 systemd[1]: Set up automount boot.automount. Dec 13 14:27:18.150701 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:27:18.150713 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:27:18.150725 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:27:18.150736 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:27:18.150747 systemd[1]: Reached target integritysetup.target. Dec 13 14:27:18.150758 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:27:18.150769 systemd[1]: Reached target remote-fs.target. Dec 13 14:27:18.150781 systemd[1]: Reached target slices.target. Dec 13 14:27:18.150792 systemd[1]: Reached target swap.target. Dec 13 14:27:18.150805 systemd[1]: Reached target torcx.target. Dec 13 14:27:18.150817 systemd[1]: Reached target veritysetup.target. Dec 13 14:27:18.150829 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:27:18.150841 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:27:18.150854 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:27:18.150866 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:27:18.150878 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:27:18.150892 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:27:18.150903 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:27:18.150917 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:27:18.150930 systemd[1]: Mounting media.mount... Dec 13 14:27:18.150942 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:18.150953 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:27:18.150965 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:27:18.151015 systemd[1]: Mounting tmp.mount... Dec 13 14:27:18.151029 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:27:18.151040 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:18.151052 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:27:18.151063 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:27:18.151078 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:18.151089 systemd[1]: Starting modprobe@drm.service... Dec 13 14:27:18.151101 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:18.151112 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:27:18.151123 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:18.151135 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:27:18.151146 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:27:18.151158 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:27:18.151171 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:27:18.151185 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:27:18.151196 systemd[1]: Stopped systemd-journald.service. Dec 13 14:27:18.151208 systemd[1]: Starting systemd-journald.service... Dec 13 14:27:18.151220 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:27:18.151232 systemd[1]: Starting systemd-network-generator.service... 
Dec 13 14:27:18.151243 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:27:18.151255 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:27:18.151266 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:27:18.151278 systemd[1]: Stopped verity-setup.service. Dec 13 14:27:18.151292 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:18.151304 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:27:18.151315 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:27:18.151326 systemd[1]: Mounted media.mount. Dec 13 14:27:18.151337 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:27:18.151349 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:27:18.151361 kernel: loop: module loaded Dec 13 14:27:18.151371 kernel: fuse: init (API version 7.34) Dec 13 14:27:18.151383 systemd[1]: Mounted tmp.mount. Dec 13 14:27:18.151396 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:27:18.151408 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:27:18.151804 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:27:18.151822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:18.151836 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:18.151855 systemd-journald[915]: Journal started Dec 13 14:27:18.151904 systemd-journald[915]: Runtime Journal (/run/log/journal/b320d7868a524ae1819063eebbed93e6) is 4.9M, max 39.5M, 34.5M free. Dec 13 14:27:12.103000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:27:12.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:27:12.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:27:12.236000 audit: BPF prog-id=10 op=LOAD Dec 13 14:27:12.236000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:27:12.238000 audit: BPF prog-id=11 op=LOAD Dec 13 14:27:12.238000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:27:12.394000 audit[846]: AVC avc: denied { associate } for pid=846 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:27:12.394000 audit[846]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=829 pid=846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:12.394000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:27:12.398000 audit[846]: AVC avc: denied { associate } for pid=846 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:27:12.398000 audit[846]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 
a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=829 pid=846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:12.398000 audit: CWD cwd="/" Dec 13 14:27:12.398000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:12.398000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:12.398000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:27:17.901000 audit: BPF prog-id=12 op=LOAD Dec 13 14:27:17.901000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:27:17.901000 audit: BPF prog-id=13 op=LOAD Dec 13 14:27:17.904000 audit: BPF prog-id=14 op=LOAD Dec 13 14:27:17.904000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:27:17.904000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:27:17.907000 audit: BPF prog-id=15 op=LOAD Dec 13 14:27:17.907000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:27:17.910000 audit: BPF prog-id=16 op=LOAD Dec 13 14:27:17.913000 audit: BPF prog-id=17 op=LOAD Dec 13 14:27:17.913000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:27:17.913000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:27:17.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:17.928000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:27:17.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:17.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:17.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:18.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.087000 audit: BPF prog-id=18 op=LOAD Dec 13 14:27:18.088000 audit: BPF prog-id=19 op=LOAD Dec 13 14:27:18.088000 audit: BPF prog-id=20 op=LOAD Dec 13 14:27:18.088000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:27:18.088000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:27:18.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.148000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:27:18.148000 audit[915]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff0c21ee30 a2=4000 a3=7fff0c21eecc items=0 ppid=1 pid=915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:18.148000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:27:18.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:17.900050 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:27:18.154274 systemd[1]: Started systemd-journald.service. Dec 13 14:27:18.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:12.391050 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:27:17.900084 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Dec 13 14:27:12.392073 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:27:17.915145 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:27:12.392115 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:27:18.154229 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:27:12.392197 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:27:18.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.154392 systemd[1]: Finished modprobe@drm.service. Dec 13 14:27:12.392221 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:27:12.392280 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:27:12.392310 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:27:18.155239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:12.392652 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:27:18.155391 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:12.392725 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:27:12.392753 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:27:12.393921 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:27:12.394032 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:27:18.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:18.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:12.394074 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:27:12.394106 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:27:12.394141 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:27:12.394169 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:27:16.863988 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:16Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:27:16.864331 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:16Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:27:16.864456 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:16Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:27:16.864674 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:16Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:27:16.864741 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:16Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:27:16.864816 /usr/lib/systemd/system-generators/torcx-generator[846]: time="2024-12-13T14:27:16Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:27:18.157699 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:27:18.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:18.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.157843 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:27:18.158590 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:18.158740 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:18.159594 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:27:18.161512 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:27:18.162371 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:27:18.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.163499 systemd[1]: Reached target network-pre.target. Dec 13 14:27:18.167438 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:27:18.169199 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:27:18.172907 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:27:18.185073 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:27:18.186566 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:27:18.187154 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:18.188108 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:27:18.188642 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:18.190077 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:27:18.194838 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:27:18.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.195569 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:27:18.196105 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:27:18.203237 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:27:18.210239 systemd-journald[915]: Time spent on flushing to /var/log/journal/b320d7868a524ae1819063eebbed93e6 is 34.428ms for 1089 entries. 
Dec 13 14:27:18.210239 systemd-journald[915]: System Journal (/var/log/journal/b320d7868a524ae1819063eebbed93e6) is 8.0M, max 584.8M, 576.8M free. Dec 13 14:27:18.264297 systemd-journald[915]: Received client request to flush runtime journal. Dec 13 14:27:18.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.226471 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:27:18.227120 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:27:18.239026 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:27:18.258095 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:27:18.263038 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:27:18.265024 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:27:18.265915 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:27:18.278698 udevadm[955]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:27:19.110626 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:27:19.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.112000 audit: BPF prog-id=21 op=LOAD Dec 13 14:27:19.112000 audit: BPF prog-id=22 op=LOAD Dec 13 14:27:19.112000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:27:19.112000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:27:19.114752 systemd[1]: Starting systemd-udevd.service... Dec 13 14:27:19.156515 systemd-udevd[956]: Using default interface naming scheme 'v252'. Dec 13 14:27:19.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.252338 systemd[1]: Started systemd-udevd.service. Dec 13 14:27:19.259000 audit: BPF prog-id=23 op=LOAD Dec 13 14:27:19.263295 systemd[1]: Starting systemd-networkd.service... Dec 13 14:27:19.283000 audit: BPF prog-id=24 op=LOAD Dec 13 14:27:19.283000 audit: BPF prog-id=25 op=LOAD Dec 13 14:27:19.283000 audit: BPF prog-id=26 op=LOAD Dec 13 14:27:19.286570 systemd[1]: Starting systemd-userdbd.service... 
Dec 13 14:27:19.333665 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:27:19.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.358181 systemd[1]: Started systemd-userdbd.service. Dec 13 14:27:19.436334 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:27:19.435641 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:27:19.442064 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:27:19.470682 systemd-networkd[968]: lo: Link UP Dec 13 14:27:19.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.470694 systemd-networkd[968]: lo: Gained carrier Dec 13 14:27:19.471247 systemd-networkd[968]: Enumeration completed Dec 13 14:27:19.471389 systemd[1]: Started systemd-networkd.service. Dec 13 14:27:19.471406 systemd-networkd[968]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:27:19.473357 systemd-networkd[968]: eth0: Link UP Dec 13 14:27:19.473372 systemd-networkd[968]: eth0: Gained carrier Dec 13 14:27:19.484189 systemd-networkd[968]: eth0: DHCPv4 address 172.24.4.127/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 14:27:19.474000 audit[977]: AVC avc: denied { confidentiality } for pid=977 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:27:19.474000 audit[977]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5580a2e4ceb0 a1=337fc a2=7fcf8000bbc5 a3=5 items=110 ppid=956 pid=977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:19.474000 audit: CWD cwd="/" Dec 13 14:27:19.474000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=1 name=(null) inode=13218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=2 name=(null) inode=13218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=3 name=(null) inode=13219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=4 name=(null) inode=13218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=5 name=(null) inode=13220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=6 name=(null) inode=13218 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=7 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=8 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=9 name=(null) inode=13222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=10 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=11 name=(null) inode=13223 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=12 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=13 name=(null) inode=13224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=14 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=15 name=(null) inode=13225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=16 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=17 name=(null) inode=13226 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=18 name=(null) inode=13218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=19 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=20 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=21 name=(null) inode=13228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=22 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=23 name=(null) inode=13229 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=24 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=25 name=(null) inode=13230 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=26 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=27 name=(null) inode=13231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=28 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=29 name=(null) inode=13232 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=30 name=(null) inode=13218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=31 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=32 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=33 name=(null) inode=13234 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=34 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=35 name=(null) inode=13235 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=36 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=37 name=(null) inode=13236 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=38 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH 
item=39 name=(null) inode=13237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=40 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=41 name=(null) inode=13238 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=42 name=(null) inode=13218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=43 name=(null) inode=13239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=44 name=(null) inode=13239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=45 name=(null) inode=13240 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=46 name=(null) inode=13239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=47 name=(null) inode=13241 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=48 name=(null) inode=13239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=49 name=(null) inode=13242 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=50 name=(null) inode=13239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=51 name=(null) inode=13243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=52 name=(null) inode=13239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=53 name=(null) inode=13244 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=55 name=(null) inode=13245 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=56 name=(null) inode=13245 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=57 name=(null) inode=13246 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=58 name=(null) inode=13245 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=59 name=(null) inode=13247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=60 name=(null) inode=13245 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=61 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=62 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=63 name=(null) inode=13249 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=64 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=65 name=(null) inode=13250 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=66 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=67 name=(null) inode=13251 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=68 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=69 name=(null) inode=13252 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=70 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=71 name=(null) inode=13253 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=72 name=(null) inode=13245 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=73 name=(null) inode=13254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=74 name=(null) inode=13254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=75 name=(null) inode=13255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=76 name=(null) inode=13254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=77 name=(null) inode=13256 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=78 name=(null) inode=13254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=79 name=(null) inode=13257 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=80 name=(null) inode=13254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=81 name=(null) inode=13258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=82 name=(null) inode=13254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=83 name=(null) inode=13259 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=84 name=(null) inode=13245 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=85 name=(null) inode=13260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=86 name=(null) inode=13260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=87 name=(null) inode=13261 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=88 
name=(null) inode=13260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=89 name=(null) inode=13262 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=90 name=(null) inode=13260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=91 name=(null) inode=13263 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=92 name=(null) inode=13260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=93 name=(null) inode=13264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=94 name=(null) inode=13260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=95 name=(null) inode=13265 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=96 name=(null) inode=13245 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=97 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=98 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=99 name=(null) inode=13267 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=100 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=101 name=(null) inode=13268 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=102 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=103 name=(null) inode=13269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=104 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=105 name=(null) inode=13270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=106 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=107 name=(null) inode=13271 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PATH item=109 name=(null) inode=13272 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:19.474000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:27:19.503000 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 14:27:19.513021 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:27:19.547028 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:27:19.563450 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:27:19.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.565433 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:27:19.615967 lvm[985]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:27:19.652963 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:27:19.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.654445 systemd[1]: Reached target cryptsetup.target. Dec 13 14:27:19.658170 systemd[1]: Starting lvm2-activation.service... Dec 13 14:27:19.667607 lvm[986]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:27:19.705272 systemd[1]: Finished lvm2-activation.service. Dec 13 14:27:19.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.706590 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:27:19.707712 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:27:19.707778 systemd[1]: Reached target local-fs.target. Dec 13 14:27:19.708857 systemd[1]: Reached target machines.target. Dec 13 14:27:19.712485 systemd[1]: Starting ldconfig.service... Dec 13 14:27:19.715065 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 14:27:19.715181 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:19.717512 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:27:19.720738 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:27:19.725932 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:27:19.732258 systemd[1]: Starting systemd-sysext.service... Dec 13 14:27:19.763211 systemd[1]: boot.automount: Got automount request for /boot, triggered by 988 (bootctl) Dec 13 14:27:19.766535 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:27:19.770591 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:27:19.787426 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:27:19.787625 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:27:19.831027 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:27:19.833655 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 14:27:19.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.236309 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:27:20.237722 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:27:20.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.286071 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:27:20.319076 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 14:27:20.388777 (sd-sysext)[1003]: Using extensions 'kubernetes'. Dec 13 14:27:20.390028 (sd-sysext)[1003]: Merged extensions into '/usr'. Dec 13 14:27:20.437596 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.446191 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:27:20.447328 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.450520 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:20.453039 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:20.455156 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:20.455747 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.455880 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:20.456051 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.460612 systemd-fsck[1000]: fsck.fat 4.2 (2021-01-31) Dec 13 14:27:20.460612 systemd-fsck[1000]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:27:20.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:20.462223 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:27:20.462943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:20.463093 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:20.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.466841 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:27:20.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.467762 systemd[1]: Finished systemd-sysext.service. Dec 13 14:27:20.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.468431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:20.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.468590 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:20.469410 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:20.469526 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:20.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.478213 systemd[1]: Mounting boot.mount... Dec 13 14:27:20.479784 systemd[1]: Starting ensure-sysext.service... Dec 13 14:27:20.485438 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:20.485515 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.486777 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:27:20.491376 systemd[1]: Reloading. 
Dec 13 14:27:20.591303 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-12-13T14:27:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:27:20.591340 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-12-13T14:27:20Z" level=info msg="torcx already run" Dec 13 14:27:20.753635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:27:20.754011 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:20.793912 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:27:20.853000 audit: BPF prog-id=27 op=LOAD Dec 13 14:27:20.853000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:27:20.853000 audit: BPF prog-id=28 op=LOAD Dec 13 14:27:20.854000 audit: BPF prog-id=29 op=LOAD Dec 13 14:27:20.854000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:27:20.854000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:27:20.854000 audit: BPF prog-id=30 op=LOAD Dec 13 14:27:20.855000 audit: BPF prog-id=31 op=LOAD Dec 13 14:27:20.855000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:27:20.855000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:27:20.856000 audit: BPF prog-id=32 op=LOAD Dec 13 14:27:20.856000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:27:20.857000 audit: BPF prog-id=33 op=LOAD Dec 13 14:27:20.857000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:27:20.857000 audit: BPF prog-id=34 op=LOAD Dec 13 14:27:20.858000 audit: BPF prog-id=35 op=LOAD Dec 13 14:27:20.858000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:27:20.858000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:27:20.863391 systemd[1]: Mounted boot.mount. Dec 13 14:27:20.880585 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.880834 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.882344 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:20.884082 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:20.887669 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:20.890115 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.890243 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:20.890361 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.891338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:20.891477 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:20.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:20.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.892311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:20.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.892422 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:20.893346 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:20.893353 systemd-tmpfiles[1011]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:27:20.896571 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.896797 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.898207 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:20.899872 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:20.901122 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.901265 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:20.901402 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.902309 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:20.902454 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:20.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.909381 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.909693 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.911468 systemd[1]: Starting modprobe@drm.service... Dec 13 14:27:20.913449 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:20.914380 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.914547 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:20.917636 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:27:20.918295 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:20.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.924374 systemd[1]: Finished ensure-sysext.service. Dec 13 14:27:20.925135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:20.925255 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:20.928017 systemd-tmpfiles[1011]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:27:20.930929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:20.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.931106 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:20.931727 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:20.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.933772 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:20.933880 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:20.934478 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:20.936383 systemd-networkd[968]: eth0: Gained IPv6LL Dec 13 14:27:20.937877 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:27:20.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.938097 systemd[1]: Finished modprobe@drm.service. 
Dec 13 14:27:20.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.941845 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:27:20.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:20.946011 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:27:20.953414 systemd-tmpfiles[1011]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:27:21.225649 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:27:21.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:21.228045 systemd[1]: Starting audit-rules.service... Dec 13 14:27:21.230721 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:27:21.236053 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:27:21.236000 audit: BPF prog-id=36 op=LOAD Dec 13 14:27:21.238000 audit: BPF prog-id=37 op=LOAD Dec 13 14:27:21.238656 systemd[1]: Starting systemd-resolved.service... Dec 13 14:27:21.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:21.242079 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:27:21.246692 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:27:21.248336 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:27:21.249087 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:27:21.258000 audit[1092]: SYSTEM_BOOT pid=1092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:27:21.275715 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:27:21.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:21.307027 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:27:21.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:21.344325 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:27:21.345092 systemd[1]: Reached target time-set.target. Dec 13 14:27:21.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:21.355579 augenrules[1108]: No rules Dec 13 14:27:21.354000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:27:21.354000 audit[1108]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf4d34930 a2=420 a3=0 items=0 ppid=1086 pid=1108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:27:21.354000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:27:21.356961 systemd[1]: Finished audit-rules.service. Dec 13 14:27:21.378889 systemd-resolved[1090]: Positive Trust Anchors: Dec 13 14:27:21.379308 systemd-resolved[1090]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:27:21.379403 systemd-resolved[1090]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:27:21.396059 systemd-resolved[1090]: Using system hostname 'ci-3510-3-6-0-e70ea02b81.novalocal'. Dec 13 14:27:21.397900 systemd[1]: Started systemd-resolved.service. Dec 13 14:27:21.398499 systemd[1]: Reached target network.target. Dec 13 14:27:21.398919 systemd[1]: Reached target network-online.target. Dec 13 14:27:21.399358 systemd[1]: Reached target nss-lookup.target. Dec 13 14:27:21.411665 ldconfig[987]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:27:21.428752 systemd[1]: Finished ldconfig.service. Dec 13 14:27:21.430884 systemd[1]: Starting systemd-update-done.service... Dec 13 14:27:21.447380 systemd[1]: Finished systemd-update-done.service. Dec 13 14:27:21.448641 systemd[1]: Reached target sysinit.target. Dec 13 14:27:21.449869 systemd[1]: Started motdgen.path. Dec 13 14:27:21.450963 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:27:21.452657 systemd[1]: Started logrotate.timer. Dec 13 14:27:21.453811 systemd[1]: Started mdadm.timer. Dec 13 14:27:21.454739 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:27:21.455770 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:27:21.455839 systemd[1]: Reached target paths.target. Dec 13 14:27:21.456843 systemd[1]: Reached target timers.target. Dec 13 14:27:21.459015 systemd[1]: Listening on dbus.socket. Dec 13 14:27:21.462558 systemd[1]: Starting docker.socket... Dec 13 14:27:21.470396 systemd[1]: Listening on sshd.socket. Dec 13 14:27:21.471903 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:21.473160 systemd[1]: Listening on docker.socket. Dec 13 14:27:21.474590 systemd[1]: Reached target sockets.target. Dec 13 14:27:21.475731 systemd[1]: Reached target basic.target. 
Dec 13 14:27:21.477061 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:27:21.477355 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:27:21.479704 systemd[1]: Starting containerd.service... Dec 13 14:27:21.483265 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:27:21.487084 systemd[1]: Starting dbus.service... Dec 13 14:27:21.493267 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:27:21.497840 systemd[1]: Starting extend-filesystems.service... Dec 13 14:27:21.500948 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:27:21.509681 systemd[1]: Starting kubelet.service... Dec 13 14:27:22.141439 systemd-timesyncd[1091]: Contacted time server 95.81.173.8:123 (0.flatcar.pool.ntp.org). Dec 13 14:27:22.141552 systemd-timesyncd[1091]: Initial clock synchronization to Fri 2024-12-13 14:27:22.141218 UTC. Dec 13 14:27:22.141649 systemd-resolved[1090]: Clock change detected. Flushing caches. Dec 13 14:27:22.145510 systemd[1]: Starting motdgen.service... Dec 13 14:27:22.153920 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:27:22.165385 systemd[1]: Starting sshd-keygen.service... Dec 13 14:27:22.165856 jq[1121]: false Dec 13 14:27:22.171610 systemd[1]: Starting systemd-logind.service... Dec 13 14:27:22.173767 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:22.173855 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:27:22.183299 jq[1134]: true Dec 13 14:27:22.174458 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:27:22.175411 systemd[1]: Starting update-engine.service... Dec 13 14:27:22.177678 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:27:22.181260 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:27:22.183468 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:27:22.185061 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:27:22.185286 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:27:22.210844 jq[1140]: true Dec 13 14:27:22.219319 systemd[1]: Created slice system-sshd.slice. Dec 13 14:27:22.239346 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:27:22.239569 systemd[1]: Finished motdgen.service. 
Dec 13 14:27:22.251700 extend-filesystems[1122]: Found loop1 Dec 13 14:27:22.251700 extend-filesystems[1122]: Found vda Dec 13 14:27:22.251700 extend-filesystems[1122]: Found vda1 Dec 13 14:27:22.251700 extend-filesystems[1122]: Found vda2 Dec 13 14:27:22.251700 extend-filesystems[1122]: Found vda3 Dec 13 14:27:22.251700 extend-filesystems[1122]: Found usr Dec 13 14:27:22.251700 extend-filesystems[1122]: Found vda4 Dec 13 14:27:22.251700 extend-filesystems[1122]: Found vda6 Dec 13 14:27:22.256897 extend-filesystems[1122]: Found vda7 Dec 13 14:27:22.256897 extend-filesystems[1122]: Found vda9 Dec 13 14:27:22.256897 extend-filesystems[1122]: Checking size of /dev/vda9 Dec 13 14:27:22.282866 env[1142]: time="2024-12-13T14:27:22.282769464Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:27:22.325672 systemd-logind[1131]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:27:22.326102 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:27:22.330843 systemd-logind[1131]: New seat seat0. Dec 13 14:27:22.338286 extend-filesystems[1122]: Resized partition /dev/vda9 Dec 13 14:27:22.375315 env[1142]: time="2024-12-13T14:27:22.358936986Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:27:22.375457 extend-filesystems[1173]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:27:22.443764 env[1142]: time="2024-12-13T14:27:22.443371493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:22.445620 env[1142]: time="2024-12-13T14:27:22.445570648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:27:22.445745 env[1142]: time="2024-12-13T14:27:22.445727011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:22.446188 env[1142]: time="2024-12-13T14:27:22.446163620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:27:22.446313 env[1142]: time="2024-12-13T14:27:22.446294145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:22.446416 env[1142]: time="2024-12-13T14:27:22.446395475Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:27:22.446508 env[1142]: time="2024-12-13T14:27:22.446490473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:22.446737 env[1142]: time="2024-12-13T14:27:22.446717348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:27:22.447210 env[1142]: time="2024-12-13T14:27:22.447192109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:27:22.447482 env[1142]: time="2024-12-13T14:27:22.447441166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:27:22.447592 env[1142]: time="2024-12-13T14:27:22.447572943Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:27:22.448944 env[1142]: time="2024-12-13T14:27:22.448918467Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:27:22.449060 env[1142]: time="2024-12-13T14:27:22.449042038Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:27:22.457079 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 14:27:22.468270 dbus-daemon[1118]: [system] SELinux support is enabled Dec 13 14:27:22.468609 systemd[1]: Started dbus.service. Dec 13 14:27:22.471870 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:27:22.471902 systemd[1]: Reached target system-config.target. Dec 13 14:27:22.472442 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:27:22.472460 systemd[1]: Reached target user-config.target. Dec 13 14:27:22.482045 systemd[1]: Started systemd-logind.service. Dec 13 14:27:22.483193 dbus-daemon[1118]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:27:22.529728 update_engine[1133]: I1213 14:27:22.528481 1133 main.cc:92] Flatcar Update Engine starting Dec 13 14:27:22.540608 systemd[1]: Started update-engine.service. Dec 13 14:27:22.652846 update_engine[1133]: I1213 14:27:22.545472 1133 update_check_scheduler.cc:74] Next update check in 11m12s Dec 13 14:27:22.543693 systemd[1]: Started locksmithd.service. Dec 13 14:27:22.674969 bash[1163]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:27:22.673772 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:27:22.698702 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 14:27:22.988943 extend-filesystems[1173]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:27:22.988943 extend-filesystems[1173]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 14:27:22.988943 extend-filesystems[1173]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 14:27:23.006893 extend-filesystems[1122]: Resized filesystem in /dev/vda9 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.996480499Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.996843289Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.996935042Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997065767Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997192114Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997274749Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997352565Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997396637Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997473551Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997548422Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997590411Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.997655352Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.998161422Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:27:23.009075 env[1142]: time="2024-12-13T14:27:22.998642464Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:27:22.992245 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:22.999877410Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000081994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000273843Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000475101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000519394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000596138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000822573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000869851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.000948980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.001525301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.001581396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.001724374Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.002859052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.004598584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:27:23.010301 env[1142]: time="2024-12-13T14:27:23.004644340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:27:22.992613 systemd[1]: Finished extend-filesystems.service. Dec 13 14:27:23.011379 env[1142]: time="2024-12-13T14:27:23.004743606Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:27:23.011379 env[1142]: time="2024-12-13T14:27:23.004830660Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:27:23.011379 env[1142]: time="2024-12-13T14:27:23.004864062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:27:23.011379 env[1142]: time="2024-12-13T14:27:23.004907875Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:27:23.011379 env[1142]: time="2024-12-13T14:27:23.005002763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:27:23.012765 env[1142]: time="2024-12-13T14:27:23.005525152Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:27:23.012765 env[1142]: time="2024-12-13T14:27:23.005727121Z" level=info msg="Connect containerd service" Dec 13 14:27:23.012765 env[1142]: time="2024-12-13T14:27:23.005815437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:27:23.012765 env[1142]: time="2024-12-13T14:27:23.012046844Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.012772204Z" level=info msg="Start subscribing containerd event" Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.013175531Z" level=info msg="Start recovering state" Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.013297139Z" level=info msg="Start event monitor" Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.013320142Z" level=info msg="Start snapshots syncer" Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.013372881Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.013384152Z" level=info msg="Start streaming server" Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.016004065Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.016083134Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:27:23.023075 env[1142]: time="2024-12-13T14:27:23.016175998Z" level=info msg="containerd successfully booted in 0.735991s" Dec 13 14:27:23.016373 systemd[1]: Started containerd.service. Dec 13 14:27:23.177169 locksmithd[1177]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:27:23.320650 sshd_keygen[1141]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:27:23.366211 systemd[1]: Finished sshd-keygen.service. Dec 13 14:27:23.368612 systemd[1]: Starting issuegen.service... Dec 13 14:27:23.370365 systemd[1]: Started sshd@0-172.24.4.127:22-172.24.4.1:59606.service. Dec 13 14:27:23.379244 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:27:23.379420 systemd[1]: Finished issuegen.service. Dec 13 14:27:23.381552 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:27:23.398425 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:27:23.400896 systemd[1]: Started getty@tty1.service. Dec 13 14:27:23.403359 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:27:23.404151 systemd[1]: Reached target getty.target. Dec 13 14:27:24.415359 sshd[1193]: Accepted publickey for core from 172.24.4.1 port 59606 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:27:24.421245 sshd[1193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:24.450209 systemd[1]: Created slice user-500.slice. Dec 13 14:27:24.455187 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:27:24.468418 systemd-logind[1131]: New session 1 of user core. Dec 13 14:27:24.480553 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:27:24.483772 systemd[1]: Starting user@500.service... Dec 13 14:27:24.491417 (systemd)[1201]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:24.640232 systemd[1201]: Queued start job for default target default.target. Dec 13 14:27:24.641318 systemd[1201]: Reached target paths.target. Dec 13 14:27:24.641342 systemd[1201]: Reached target sockets.target. Dec 13 14:27:24.641358 systemd[1201]: Reached target timers.target. Dec 13 14:27:24.641372 systemd[1201]: Reached target basic.target. Dec 13 14:27:24.641424 systemd[1201]: Reached target default.target. Dec 13 14:27:24.641455 systemd[1201]: Startup finished in 141ms. Dec 13 14:27:24.641723 systemd[1]: Started user@500.service. Dec 13 14:27:24.643431 systemd[1]: Started session-1.scope. Dec 13 14:27:25.107854 systemd[1]: Started kubelet.service. Dec 13 14:27:25.130204 systemd[1]: Started sshd@1-172.24.4.127:22-172.24.4.1:57630.service. Dec 13 14:27:26.722359 sshd[1213]: Accepted publickey for core from 172.24.4.1 port 57630 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:27:26.726074 sshd[1213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:26.738451 systemd[1]: Started session-2.scope. Dec 13 14:27:26.740162 systemd-logind[1131]: New session 2 of user core. 
Dec 13 14:27:27.179783 kubelet[1211]: E1213 14:27:27.179621 1211 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:27.183325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:27.183631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:27.184278 systemd[1]: kubelet.service: Consumed 2.172s CPU time. Dec 13 14:27:27.330385 sshd[1213]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:27.339014 systemd[1]: Started sshd@2-172.24.4.127:22-172.24.4.1:57632.service. Dec 13 14:27:27.341292 systemd[1]: sshd@1-172.24.4.127:22-172.24.4.1:57630.service: Deactivated successfully. Dec 13 14:27:27.343048 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:27:27.347150 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:27:27.349836 systemd-logind[1131]: Removed session 2. Dec 13 14:27:28.884431 sshd[1225]: Accepted publickey for core from 172.24.4.1 port 57632 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:27:28.887188 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:28.898082 systemd-logind[1131]: New session 3 of user core. Dec 13 14:27:28.899013 systemd[1]: Started session-3.scope. Dec 13 14:27:29.266211 coreos-metadata[1117]: Dec 13 14:27:29.266 WARN failed to locate config-drive, using the metadata service API instead Dec 13 14:27:29.343206 coreos-metadata[1117]: Dec 13 14:27:29.342 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 14:27:29.408913 sshd[1225]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:29.414251 systemd[1]: sshd@2-172.24.4.127:22-172.24.4.1:57632.service: Deactivated successfully. Dec 13 14:27:29.415878 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:27:29.417207 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:27:29.419239 systemd-logind[1131]: Removed session 3. Dec 13 14:27:29.623970 coreos-metadata[1117]: Dec 13 14:27:29.623 INFO Fetch successful Dec 13 14:27:29.623970 coreos-metadata[1117]: Dec 13 14:27:29.623 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:27:29.639155 coreos-metadata[1117]: Dec 13 14:27:29.639 INFO Fetch successful Dec 13 14:27:29.649419 unknown[1117]: wrote ssh authorized keys file for user: core Dec 13 14:27:29.683489 update-ssh-keys[1233]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:27:29.685173 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:27:29.686264 systemd[1]: Reached target multi-user.target. Dec 13 14:27:29.689715 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:27:29.707544 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:27:29.707953 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:27:29.709042 systemd[1]: Startup finished in 950ms (kernel) + 7.260s (initrd) + 17.092s (userspace) = 25.304s. Dec 13 14:27:37.266532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:27:37.268030 systemd[1]: Stopped kubelet.service. 
Dec 13 14:27:37.268133 systemd[1]: kubelet.service: Consumed 2.172s CPU time. Dec 13 14:27:37.272234 systemd[1]: Starting kubelet.service... Dec 13 14:27:37.608263 systemd[1]: Started kubelet.service. Dec 13 14:27:37.913965 kubelet[1239]: E1213 14:27:37.913762 1239 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:37.921061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:37.921335 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:39.421141 systemd[1]: Started sshd@3-172.24.4.127:22-172.24.4.1:51106.service. Dec 13 14:27:40.548615 sshd[1247]: Accepted publickey for core from 172.24.4.1 port 51106 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:27:40.552107 sshd[1247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:40.564272 systemd-logind[1131]: New session 4 of user core. Dec 13 14:27:40.564326 systemd[1]: Started session-4.scope. Dec 13 14:27:41.471792 sshd[1247]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:41.479118 systemd[1]: sshd@3-172.24.4.127:22-172.24.4.1:51106.service: Deactivated successfully. Dec 13 14:27:41.481158 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:27:41.483841 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:27:41.487019 systemd[1]: Started sshd@4-172.24.4.127:22-172.24.4.1:51120.service. Dec 13 14:27:41.490415 systemd-logind[1131]: Removed session 4. Dec 13 14:27:43.030055 sshd[1253]: Accepted publickey for core from 172.24.4.1 port 51120 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:27:43.033266 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:43.045979 systemd-logind[1131]: New session 5 of user core. Dec 13 14:27:43.047479 systemd[1]: Started session-5.scope. Dec 13 14:27:43.852488 sshd[1253]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:43.859909 systemd[1]: Started sshd@5-172.24.4.127:22-172.24.4.1:51128.service. Dec 13 14:27:43.864577 systemd[1]: sshd@4-172.24.4.127:22-172.24.4.1:51120.service: Deactivated successfully. Dec 13 14:27:43.866224 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:27:43.869046 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:27:43.872002 systemd-logind[1131]: Removed session 5. Dec 13 14:27:45.476616 sshd[1258]: Accepted publickey for core from 172.24.4.1 port 51128 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:27:45.479222 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:45.489811 systemd-logind[1131]: New session 6 of user core. Dec 13 14:27:45.490807 systemd[1]: Started session-6.scope. Dec 13 14:27:46.138470 sshd[1258]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:46.146415 systemd[1]: sshd@5-172.24.4.127:22-172.24.4.1:51128.service: Deactivated successfully. Dec 13 14:27:46.148065 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:27:46.149937 systemd-logind[1131]: Session 6 logged out. Waiting for processes to exit. 
Dec 13 14:27:46.153077 systemd[1]: Started sshd@6-172.24.4.127:22-172.24.4.1:56914.service. Dec 13 14:27:46.156109 systemd-logind[1131]: Removed session 6. Dec 13 14:27:47.374549 sshd[1265]: Accepted publickey for core from 172.24.4.1 port 56914 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:27:47.377842 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:47.386401 systemd-logind[1131]: New session 7 of user core. Dec 13 14:27:47.388765 systemd[1]: Started session-7.scope. Dec 13 14:27:47.877893 sudo[1268]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:27:47.879046 sudo[1268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:27:47.905293 systemd[1]: Starting coreos-metadata.service... Dec 13 14:27:48.016170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:27:48.016840 systemd[1]: Stopped kubelet.service. Dec 13 14:27:48.019732 systemd[1]: Starting kubelet.service... Dec 13 14:27:48.397977 systemd[1]: Started kubelet.service. Dec 13 14:27:48.696276 kubelet[1279]: E1213 14:27:48.695924 1279 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:48.699959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:48.700103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:54.976604 coreos-metadata[1272]: Dec 13 14:27:54.976 WARN failed to locate config-drive, using the metadata service API instead Dec 13 14:27:55.061048 coreos-metadata[1272]: Dec 13 14:27:55.060 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 14:27:55.369768 coreos-metadata[1272]: Dec 13 14:27:55.369 INFO Fetch successful Dec 13 14:27:55.370023 coreos-metadata[1272]: Dec 13 14:27:55.369 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 14:27:55.381400 coreos-metadata[1272]: Dec 13 14:27:55.381 INFO Fetch successful Dec 13 14:27:55.381829 coreos-metadata[1272]: Dec 13 14:27:55.381 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 14:27:55.398580 coreos-metadata[1272]: Dec 13 14:27:55.398 INFO Fetch successful Dec 13 14:27:55.398753 coreos-metadata[1272]: Dec 13 14:27:55.398 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 14:27:55.410394 coreos-metadata[1272]: Dec 13 14:27:55.410 INFO Fetch successful Dec 13 14:27:55.410791 coreos-metadata[1272]: Dec 13 14:27:55.410 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 14:27:55.423605 coreos-metadata[1272]: Dec 13 14:27:55.423 INFO Fetch successful Dec 13 14:27:55.440114 systemd[1]: Finished coreos-metadata.service. Dec 13 14:27:57.068155 systemd[1]: Stopped kubelet.service. Dec 13 14:27:57.074185 systemd[1]: Starting kubelet.service... Dec 13 14:27:57.117877 systemd[1]: Reloading. 
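[editor's note] The coreos-metadata entries in the log above show the agent failing to find a config-drive and falling back to the OpenStack/EC2-compatible metadata service at 169.254.169.254, fetching hostname, instance-id, instance-type, local-ipv4 and public-ipv4 (and earlier the public SSH keys). The following is only a minimal Python sketch of those same HTTP fetches, not the coreos-metadata implementation; the retry count and timeout are assumptions for illustration.

#!/usr/bin/env python3
"""Sketch of the metadata-service fetches recorded in the log.
Endpoint and paths are taken from the log lines; retries/timeout are assumed."""
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data"
PATHS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

def fetch(path: str, attempts: int = 3) -> str:
    """Fetch one metadata path, retrying a few times before giving up."""
    url = f"{METADATA_BASE}/{path}"
    last_err = None
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read().decode().strip()
        except OSError as err:  # connection errors and timeouts
            last_err = err
    raise RuntimeError(f"failed to fetch {url}: {last_err}")

if __name__ == "__main__":
    for p in PATHS:
        print(p, "=", fetch(p))

[end editor's note]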
Dec 13 14:27:57.242005 /usr/lib/systemd/system-generators/torcx-generator[1343]: time="2024-12-13T14:27:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:27:57.242044 /usr/lib/systemd/system-generators/torcx-generator[1343]: time="2024-12-13T14:27:57Z" level=info msg="torcx already run" Dec 13 14:27:57.380327 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:27:57.380593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:57.420791 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:27:57.679550 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:27:57.679780 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:27:57.681339 systemd[1]: Stopped kubelet.service. Dec 13 14:27:57.685328 systemd[1]: Starting kubelet.service... Dec 13 14:27:57.814776 systemd[1]: Started kubelet.service. Dec 13 14:27:57.869462 kubelet[1395]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:27:57.869885 kubelet[1395]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:27:57.869939 kubelet[1395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:27:58.282612 kubelet[1395]: I1213 14:27:58.282473 1395 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:27:59.169236 kubelet[1395]: I1213 14:27:59.169158 1395 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:27:59.169236 kubelet[1395]: I1213 14:27:59.169228 1395 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:27:59.169826 kubelet[1395]: I1213 14:27:59.169787 1395 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:27:59.200840 kubelet[1395]: I1213 14:27:59.200812 1395 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:27:59.228312 kubelet[1395]: I1213 14:27:59.228290 1395 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:27:59.230621 kubelet[1395]: I1213 14:27:59.230587 1395 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:27:59.231041 kubelet[1395]: I1213 14:27:59.230742 1395 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:27:59.231188 kubelet[1395]: I1213 14:27:59.231175 1395 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:27:59.231257 kubelet[1395]: I1213 14:27:59.231248 1395 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:27:59.231425 kubelet[1395]: I1213 14:27:59.231413 1395 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:27:59.233244 kubelet[1395]: I1213 14:27:59.233233 1395 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:27:59.233561 kubelet[1395]: I1213 14:27:59.233550 1395 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:27:59.233675 kubelet[1395]: I1213 14:27:59.233645 1395 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:27:59.234052 kubelet[1395]: I1213 14:27:59.234042 1395 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:27:59.234238 kubelet[1395]: E1213 14:27:59.233833 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:59.234304 kubelet[1395]: E1213 14:27:59.233757 1395 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:27:59.240177 kubelet[1395]: I1213 14:27:59.240162 1395 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:27:59.242331 kubelet[1395]: I1213 14:27:59.242315 1395 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:27:59.242455 kubelet[1395]: W1213 14:27:59.242443 1395 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:27:59.243102 kubelet[1395]: I1213 14:27:59.243089 1395 server.go:1264] "Started kubelet" Dec 13 14:27:59.245561 kubelet[1395]: W1213 14:27:59.245541 1395 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.127" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:27:59.246415 kubelet[1395]: E1213 14:27:59.246400 1395 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.127" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:27:59.246711 kubelet[1395]: W1213 14:27:59.246644 1395 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:27:59.246843 kubelet[1395]: E1213 14:27:59.246831 1395 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:27:59.250051 kubelet[1395]: I1213 14:27:59.250005 1395 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:27:59.251880 kubelet[1395]: I1213 14:27:59.251831 1395 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:27:59.254450 kubelet[1395]: I1213 14:27:59.254427 1395 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:27:59.257136 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:27:59.258159 kubelet[1395]: I1213 14:27:59.258143 1395 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:27:59.259061 kubelet[1395]: I1213 14:27:59.258195 1395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:27:59.267254 kubelet[1395]: I1213 14:27:59.267238 1395 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:27:59.267785 kubelet[1395]: I1213 14:27:59.267770 1395 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:27:59.267905 kubelet[1395]: I1213 14:27:59.267895 1395 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:27:59.268818 kubelet[1395]: I1213 14:27:59.268801 1395 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:27:59.269001 kubelet[1395]: I1213 14:27:59.268982 1395 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:27:59.270277 kubelet[1395]: E1213 14:27:59.270262 1395 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:27:59.271314 kubelet[1395]: I1213 14:27:59.271301 1395 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:27:59.282919 kubelet[1395]: E1213 14:27:59.281313 1395 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.127\" not found" node="172.24.4.127" Dec 13 14:27:59.294592 kubelet[1395]: I1213 14:27:59.294553 1395 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:27:59.294842 kubelet[1395]: I1213 14:27:59.294827 1395 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:27:59.294936 kubelet[1395]: I1213 14:27:59.294925 1395 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:27:59.305969 kubelet[1395]: I1213 14:27:59.305948 1395 policy_none.go:49] "None policy: Start" Dec 13 14:27:59.306826 kubelet[1395]: I1213 14:27:59.306797 1395 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:27:59.306952 kubelet[1395]: I1213 14:27:59.306940 1395 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:27:59.312503 systemd[1]: Created slice kubepods.slice. Dec 13 14:27:59.317277 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:27:59.323686 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:27:59.336581 kubelet[1395]: I1213 14:27:59.336508 1395 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:27:59.336808 kubelet[1395]: I1213 14:27:59.336732 1395 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:27:59.336947 kubelet[1395]: I1213 14:27:59.336904 1395 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:27:59.339961 kubelet[1395]: E1213 14:27:59.339942 1395 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.127\" not found" Dec 13 14:27:59.368590 kubelet[1395]: I1213 14:27:59.368556 1395 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.127" Dec 13 14:27:59.378755 kubelet[1395]: I1213 14:27:59.378643 1395 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.127" Dec 13 14:27:59.399347 kubelet[1395]: E1213 14:27:59.399279 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:27:59.444475 kubelet[1395]: I1213 14:27:59.444358 1395 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:27:59.447083 kubelet[1395]: I1213 14:27:59.447065 1395 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:27:59.447220 kubelet[1395]: I1213 14:27:59.447209 1395 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:27:59.447312 kubelet[1395]: I1213 14:27:59.447302 1395 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:27:59.447425 kubelet[1395]: E1213 14:27:59.447406 1395 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:27:59.499670 kubelet[1395]: E1213 14:27:59.499612 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:27:59.599908 kubelet[1395]: E1213 14:27:59.599851 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:27:59.701014 kubelet[1395]: E1213 14:27:59.700796 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:27:59.801880 kubelet[1395]: E1213 14:27:59.801828 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:27:59.854400 sudo[1268]: pam_unix(sudo:session): session closed for user root Dec 13 14:27:59.902758 kubelet[1395]: E1213 14:27:59.902704 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:28:00.003930 kubelet[1395]: E1213 14:28:00.003793 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:28:00.104945 kubelet[1395]: E1213 14:28:00.104867 1395 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.127\" not found" Dec 13 14:28:00.117379 sshd[1265]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:00.124764 systemd[1]: sshd@6-172.24.4.127:22-172.24.4.1:56914.service: Deactivated successfully. Dec 13 14:28:00.126751 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:28:00.127123 systemd[1]: session-7.scope: Consumed 1.100s CPU time. Dec 13 14:28:00.128424 systemd-logind[1131]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:28:00.130571 systemd-logind[1131]: Removed session 7. 
Dec 13 14:28:00.173472 kubelet[1395]: I1213 14:28:00.173393 1395 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:28:00.174339 kubelet[1395]: W1213 14:28:00.173735 1395 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:28:00.174339 kubelet[1395]: W1213 14:28:00.173806 1395 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:28:00.174339 kubelet[1395]: W1213 14:28:00.173857 1395 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:28:00.207694 kubelet[1395]: I1213 14:28:00.207584 1395 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:28:00.208611 env[1142]: time="2024-12-13T14:28:00.208480071Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:28:00.209733 kubelet[1395]: I1213 14:28:00.209689 1395 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:28:00.234852 kubelet[1395]: I1213 14:28:00.234803 1395 apiserver.go:52] "Watching apiserver" Dec 13 14:28:00.235626 kubelet[1395]: E1213 14:28:00.235591 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:00.262547 kubelet[1395]: I1213 14:28:00.262284 1395 topology_manager.go:215] "Topology Admit Handler" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" podNamespace="kube-system" podName="cilium-rb8k5" Dec 13 14:28:00.265130 kubelet[1395]: I1213 14:28:00.265041 1395 topology_manager.go:215] "Topology Admit Handler" podUID="8bfeea44-ae39-47af-850a-781e2c6c6358" podNamespace="kube-system" podName="kube-proxy-495n6" Dec 13 14:28:00.269457 kubelet[1395]: I1213 14:28:00.269173 1395 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:28:00.273238 kubelet[1395]: I1213 14:28:00.273191 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-hostproc\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.273512 kubelet[1395]: I1213 14:28:00.273473 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cni-path\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.273820 kubelet[1395]: I1213 14:28:00.273757 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-lib-modules\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 
14:28:00.274292 kubelet[1395]: I1213 14:28:00.274220 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-bpf-maps\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.274755 kubelet[1395]: I1213 14:28:00.274648 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-cgroup\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.275039 kubelet[1395]: I1213 14:28:00.274994 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-xtables-lock\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.275310 kubelet[1395]: I1213 14:28:00.275247 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-config-path\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.276250 kubelet[1395]: I1213 14:28:00.276206 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-kernel\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.276510 kubelet[1395]: I1213 14:28:00.276474 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-hubble-tls\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.276862 kubelet[1395]: I1213 14:28:00.276825 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-run\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.277177 kubelet[1395]: I1213 14:28:00.277138 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-etc-cni-netd\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.277409 kubelet[1395]: I1213 14:28:00.277374 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-net\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.277632 kubelet[1395]: I1213 14:28:00.277595 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8bfeea44-ae39-47af-850a-781e2c6c6358-xtables-lock\") pod \"kube-proxy-495n6\" (UID: \"8bfeea44-ae39-47af-850a-781e2c6c6358\") " pod="kube-system/kube-proxy-495n6" Dec 13 14:28:00.278004 kubelet[1395]: I1213 14:28:00.277965 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bfeea44-ae39-47af-850a-781e2c6c6358-lib-modules\") pod \"kube-proxy-495n6\" (UID: \"8bfeea44-ae39-47af-850a-781e2c6c6358\") " pod="kube-system/kube-proxy-495n6" Dec 13 14:28:00.279721 kubelet[1395]: I1213 14:28:00.279636 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23da3eee-0524-499e-82ee-64acfa7a2160-clustermesh-secrets\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.280005 kubelet[1395]: I1213 14:28:00.279965 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctw9g\" (UniqueName: \"kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-kube-api-access-ctw9g\") pod \"cilium-rb8k5\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " pod="kube-system/cilium-rb8k5" Dec 13 14:28:00.280239 kubelet[1395]: I1213 14:28:00.280185 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8bfeea44-ae39-47af-850a-781e2c6c6358-kube-proxy\") pod \"kube-proxy-495n6\" (UID: \"8bfeea44-ae39-47af-850a-781e2c6c6358\") " pod="kube-system/kube-proxy-495n6" Dec 13 14:28:00.280467 kubelet[1395]: I1213 14:28:00.280431 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpv2\" (UniqueName: \"kubernetes.io/projected/8bfeea44-ae39-47af-850a-781e2c6c6358-kube-api-access-hbpv2\") pod \"kube-proxy-495n6\" (UID: \"8bfeea44-ae39-47af-850a-781e2c6c6358\") " pod="kube-system/kube-proxy-495n6" Dec 13 14:28:00.281083 systemd[1]: Created slice kubepods-burstable-pod23da3eee_0524_499e_82ee_64acfa7a2160.slice. Dec 13 14:28:00.310090 systemd[1]: Created slice kubepods-besteffort-pod8bfeea44_ae39_47af_850a_781e2c6c6358.slice. Dec 13 14:28:00.607609 env[1142]: time="2024-12-13T14:28:00.607080044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb8k5,Uid:23da3eee-0524-499e-82ee-64acfa7a2160,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:00.625169 env[1142]: time="2024-12-13T14:28:00.625057848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-495n6,Uid:8bfeea44-ae39-47af-850a-781e2c6c6358,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:01.236653 kubelet[1395]: E1213 14:28:01.236554 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:01.442840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633453755.mount: Deactivated successfully. 
Dec 13 14:28:01.460980 env[1142]: time="2024-12-13T14:28:01.460794430Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.464194 env[1142]: time="2024-12-13T14:28:01.464082303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.467646 env[1142]: time="2024-12-13T14:28:01.467591674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.471484 env[1142]: time="2024-12-13T14:28:01.471409541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.476296 env[1142]: time="2024-12-13T14:28:01.476237209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.479254 env[1142]: time="2024-12-13T14:28:01.479194833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.484640 env[1142]: time="2024-12-13T14:28:01.484579457Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.487796 env[1142]: time="2024-12-13T14:28:01.487565156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.533347 env[1142]: time="2024-12-13T14:28:01.533202354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:01.533503 env[1142]: time="2024-12-13T14:28:01.533381199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:01.533560 env[1142]: time="2024-12-13T14:28:01.533490611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:01.535765 env[1142]: time="2024-12-13T14:28:01.535234450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc pid=1452 runtime=io.containerd.runc.v2 Dec 13 14:28:01.550267 env[1142]: time="2024-12-13T14:28:01.550126604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:01.550267 env[1142]: time="2024-12-13T14:28:01.550182913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:01.550267 env[1142]: time="2024-12-13T14:28:01.550197632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:01.550825 env[1142]: time="2024-12-13T14:28:01.550736422Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8717ce339ff73d3f63740890d238a230e15feb6296230fa698248aef974ba6ec pid=1464 runtime=io.containerd.runc.v2 Dec 13 14:28:01.558480 systemd[1]: Started cri-containerd-748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc.scope. Dec 13 14:28:01.577722 systemd[1]: Started cri-containerd-8717ce339ff73d3f63740890d238a230e15feb6296230fa698248aef974ba6ec.scope. Dec 13 14:28:01.615371 env[1142]: time="2024-12-13T14:28:01.615245895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb8k5,Uid:23da3eee-0524-499e-82ee-64acfa7a2160,Namespace:kube-system,Attempt:0,} returns sandbox id \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\"" Dec 13 14:28:01.621399 env[1142]: time="2024-12-13T14:28:01.620895551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:28:01.634115 env[1142]: time="2024-12-13T14:28:01.634051631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-495n6,Uid:8bfeea44-ae39-47af-850a-781e2c6c6358,Namespace:kube-system,Attempt:0,} returns sandbox id \"8717ce339ff73d3f63740890d238a230e15feb6296230fa698248aef974ba6ec\"" Dec 13 14:28:02.237587 kubelet[1395]: E1213 14:28:02.237487 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:03.238875 kubelet[1395]: E1213 14:28:03.238750 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:04.239166 kubelet[1395]: E1213 14:28:04.238994 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:05.239240 kubelet[1395]: E1213 14:28:05.239166 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:06.240527 kubelet[1395]: E1213 14:28:06.240480 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:07.241015 kubelet[1395]: E1213 14:28:07.240955 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:07.411761 update_engine[1133]: I1213 14:28:07.410794 1133 update_attempter.cc:509] Updating boot flags... Dec 13 14:28:08.241269 kubelet[1395]: E1213 14:28:08.241204 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:09.243140 kubelet[1395]: E1213 14:28:09.243026 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:10.244384 kubelet[1395]: E1213 14:28:10.244121 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:11.245123 kubelet[1395]: E1213 14:28:11.245058 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:11.628189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878960610.mount: Deactivated successfully. 
Dec 13 14:28:12.245653 kubelet[1395]: E1213 14:28:12.245562 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:13.246912 kubelet[1395]: E1213 14:28:13.246714 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:14.248947 kubelet[1395]: E1213 14:28:14.248817 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:15.249683 kubelet[1395]: E1213 14:28:15.249213 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:16.250054 kubelet[1395]: E1213 14:28:16.249958 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:16.913186 env[1142]: time="2024-12-13T14:28:16.913051684Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.921790 env[1142]: time="2024-12-13T14:28:16.921598266Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.934623 env[1142]: time="2024-12-13T14:28:16.934516974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.936330 env[1142]: time="2024-12-13T14:28:16.936263828Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:28:16.943236 env[1142]: time="2024-12-13T14:28:16.943138649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:28:16.946793 env[1142]: time="2024-12-13T14:28:16.946694986Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:28:16.978870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669749223.mount: Deactivated successfully. Dec 13 14:28:17.010803 env[1142]: time="2024-12-13T14:28:17.010642744Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\"" Dec 13 14:28:17.013175 env[1142]: time="2024-12-13T14:28:17.013060577Z" level=info msg="StartContainer for \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\"" Dec 13 14:28:17.057216 systemd[1]: Started cri-containerd-0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4.scope. 
Dec 13 14:28:17.097752 env[1142]: time="2024-12-13T14:28:17.097360553Z" level=info msg="StartContainer for \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\" returns successfully" Dec 13 14:28:17.103003 systemd[1]: cri-containerd-0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4.scope: Deactivated successfully. Dec 13 14:28:17.252829 kubelet[1395]: E1213 14:28:17.250942 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:17.692866 env[1142]: time="2024-12-13T14:28:17.692735249Z" level=info msg="shim disconnected" id=0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4 Dec 13 14:28:17.692866 env[1142]: time="2024-12-13T14:28:17.692863402Z" level=warning msg="cleaning up after shim disconnected" id=0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4 namespace=k8s.io Dec 13 14:28:17.693347 env[1142]: time="2024-12-13T14:28:17.692890794Z" level=info msg="cleaning up dead shim" Dec 13 14:28:17.711466 env[1142]: time="2024-12-13T14:28:17.711384372Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1589 runtime=io.containerd.runc.v2\n" Dec 13 14:28:17.973998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4-rootfs.mount: Deactivated successfully. Dec 13 14:28:18.252307 kubelet[1395]: E1213 14:28:18.252113 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:18.549240 env[1142]: time="2024-12-13T14:28:18.549171270Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:28:18.692604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423705503.mount: Deactivated successfully. Dec 13 14:28:18.721314 env[1142]: time="2024-12-13T14:28:18.721267152Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\"" Dec 13 14:28:18.723285 env[1142]: time="2024-12-13T14:28:18.723211505Z" level=info msg="StartContainer for \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\"" Dec 13 14:28:18.768822 systemd[1]: Started cri-containerd-5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648.scope. Dec 13 14:28:18.818286 env[1142]: time="2024-12-13T14:28:18.818118869Z" level=info msg="StartContainer for \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\" returns successfully" Dec 13 14:28:18.825859 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:28:18.826503 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:28:18.827089 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:28:18.836987 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:28:18.839879 systemd[1]: cri-containerd-5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648.scope: Deactivated successfully. Dec 13 14:28:18.849050 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:28:18.973231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223382743.mount: Deactivated successfully. 
Dec 13 14:28:19.235589 kubelet[1395]: E1213 14:28:19.234501 1395 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:19.252442 kubelet[1395]: E1213 14:28:19.252312 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:19.302036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007262573.mount: Deactivated successfully. Dec 13 14:28:19.669104 env[1142]: time="2024-12-13T14:28:19.668984472Z" level=info msg="shim disconnected" id=5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648 Dec 13 14:28:19.669872 env[1142]: time="2024-12-13T14:28:19.669759089Z" level=warning msg="cleaning up after shim disconnected" id=5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648 namespace=k8s.io Dec 13 14:28:19.669872 env[1142]: time="2024-12-13T14:28:19.669817069Z" level=info msg="cleaning up dead shim" Dec 13 14:28:19.691464 env[1142]: time="2024-12-13T14:28:19.691363390Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1653 runtime=io.containerd.runc.v2\n" Dec 13 14:28:20.253249 kubelet[1395]: E1213 14:28:20.253130 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:20.563580 env[1142]: time="2024-12-13T14:28:20.563474663Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:28:20.577375 env[1142]: time="2024-12-13T14:28:20.577219490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:20.580291 env[1142]: time="2024-12-13T14:28:20.580244924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:20.593362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695084759.mount: Deactivated successfully. 
Dec 13 14:28:20.594145 env[1142]: time="2024-12-13T14:28:20.594116801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:20.605489 env[1142]: time="2024-12-13T14:28:20.605407094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:20.605781 env[1142]: time="2024-12-13T14:28:20.605717282Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\"" Dec 13 14:28:20.606284 env[1142]: time="2024-12-13T14:28:20.606248156Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 14:28:20.607723 env[1142]: time="2024-12-13T14:28:20.607652152Z" level=info msg="StartContainer for \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\"" Dec 13 14:28:20.611634 env[1142]: time="2024-12-13T14:28:20.611539268Z" level=info msg="CreateContainer within sandbox \"8717ce339ff73d3f63740890d238a230e15feb6296230fa698248aef974ba6ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:28:20.649283 systemd[1]: Started cri-containerd-f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef.scope. Dec 13 14:28:20.672936 env[1142]: time="2024-12-13T14:28:20.672883177Z" level=info msg="CreateContainer within sandbox \"8717ce339ff73d3f63740890d238a230e15feb6296230fa698248aef974ba6ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1baae628522a9aa28b3348359e383e988597f631bb589204f89356730cd0e41b\"" Dec 13 14:28:20.673909 env[1142]: time="2024-12-13T14:28:20.673863713Z" level=info msg="StartContainer for \"1baae628522a9aa28b3348359e383e988597f631bb589204f89356730cd0e41b\"" Dec 13 14:28:20.693817 env[1142]: time="2024-12-13T14:28:20.693773775Z" level=info msg="StartContainer for \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\" returns successfully" Dec 13 14:28:20.693841 systemd[1]: cri-containerd-f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef.scope: Deactivated successfully. Dec 13 14:28:20.711967 systemd[1]: Started cri-containerd-1baae628522a9aa28b3348359e383e988597f631bb589204f89356730cd0e41b.scope. 
Dec 13 14:28:21.178369 env[1142]: time="2024-12-13T14:28:21.177623566Z" level=info msg="StartContainer for \"1baae628522a9aa28b3348359e383e988597f631bb589204f89356730cd0e41b\" returns successfully" Dec 13 14:28:21.181569 env[1142]: time="2024-12-13T14:28:21.181503102Z" level=info msg="shim disconnected" id=f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef Dec 13 14:28:21.181569 env[1142]: time="2024-12-13T14:28:21.181568656Z" level=warning msg="cleaning up after shim disconnected" id=f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef namespace=k8s.io Dec 13 14:28:21.182067 env[1142]: time="2024-12-13T14:28:21.181582803Z" level=info msg="cleaning up dead shim" Dec 13 14:28:21.196652 env[1142]: time="2024-12-13T14:28:21.196579263Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1764 runtime=io.containerd.runc.v2\n" Dec 13 14:28:21.254287 kubelet[1395]: E1213 14:28:21.254241 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:21.573899 env[1142]: time="2024-12-13T14:28:21.573713827Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:28:21.588692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070067855.mount: Deactivated successfully. Dec 13 14:28:21.588874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef-rootfs.mount: Deactivated successfully. Dec 13 14:28:21.627055 env[1142]: time="2024-12-13T14:28:21.626988845Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\"" Dec 13 14:28:21.627729 env[1142]: time="2024-12-13T14:28:21.627701492Z" level=info msg="StartContainer for \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\"" Dec 13 14:28:21.636628 kubelet[1395]: I1213 14:28:21.636426 1395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-495n6" podStartSLOduration=3.663994314 podStartE2EDuration="22.636403549s" podCreationTimestamp="2024-12-13 14:27:59 +0000 UTC" firstStartedPulling="2024-12-13 14:28:01.635630611 +0000 UTC m=+3.815834034" lastFinishedPulling="2024-12-13 14:28:20.608039836 +0000 UTC m=+22.788243269" observedRunningTime="2024-12-13 14:28:21.636005165 +0000 UTC m=+23.816208618" watchObservedRunningTime="2024-12-13 14:28:21.636403549 +0000 UTC m=+23.816607002" Dec 13 14:28:21.657286 systemd[1]: Started cri-containerd-e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5.scope. Dec 13 14:28:21.691437 systemd[1]: cri-containerd-e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5.scope: Deactivated successfully. 
Dec 13 14:28:21.703321 env[1142]: time="2024-12-13T14:28:21.703241716Z" level=info msg="StartContainer for \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\" returns successfully" Dec 13 14:28:21.747889 env[1142]: time="2024-12-13T14:28:21.747845174Z" level=info msg="shim disconnected" id=e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5 Dec 13 14:28:21.748096 env[1142]: time="2024-12-13T14:28:21.748076492Z" level=warning msg="cleaning up after shim disconnected" id=e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5 namespace=k8s.io Dec 13 14:28:21.748176 env[1142]: time="2024-12-13T14:28:21.748159809Z" level=info msg="cleaning up dead shim" Dec 13 14:28:21.756073 env[1142]: time="2024-12-13T14:28:21.756035372Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1928 runtime=io.containerd.runc.v2\n" Dec 13 14:28:22.254876 kubelet[1395]: E1213 14:28:22.254792 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:22.586545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5-rootfs.mount: Deactivated successfully. Dec 13 14:28:22.603761 env[1142]: time="2024-12-13T14:28:22.603444709Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:28:22.632362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198644928.mount: Deactivated successfully. Dec 13 14:28:22.652439 env[1142]: time="2024-12-13T14:28:22.652369684Z" level=info msg="CreateContainer within sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\"" Dec 13 14:28:22.653804 env[1142]: time="2024-12-13T14:28:22.653754041Z" level=info msg="StartContainer for \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\"" Dec 13 14:28:22.697964 systemd[1]: Started cri-containerd-f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3.scope. 
Dec 13 14:28:22.755416 env[1142]: time="2024-12-13T14:28:22.755346972Z" level=info msg="StartContainer for \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\" returns successfully" Dec 13 14:28:22.905760 kubelet[1395]: I1213 14:28:22.905259 1395 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:28:23.256638 kubelet[1395]: E1213 14:28:23.256494 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:23.295741 kernel: Initializing XFRM netlink socket Dec 13 14:28:24.256831 kubelet[1395]: E1213 14:28:24.256756 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:25.121212 systemd-networkd[968]: cilium_host: Link UP Dec 13 14:28:25.124023 systemd-networkd[968]: cilium_net: Link UP Dec 13 14:28:25.130204 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:28:25.130354 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:28:25.133124 systemd-networkd[968]: cilium_net: Gained carrier Dec 13 14:28:25.134768 systemd-networkd[968]: cilium_host: Gained carrier Dec 13 14:28:25.159163 systemd-networkd[968]: cilium_net: Gained IPv6LL Dec 13 14:28:25.252875 systemd-networkd[968]: cilium_vxlan: Link UP Dec 13 14:28:25.252883 systemd-networkd[968]: cilium_vxlan: Gained carrier Dec 13 14:28:25.258464 kubelet[1395]: E1213 14:28:25.258419 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:25.543726 kernel: NET: Registered PF_ALG protocol family Dec 13 14:28:25.822093 systemd-networkd[968]: cilium_host: Gained IPv6LL Dec 13 14:28:26.259536 kubelet[1395]: E1213 14:28:26.259350 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:26.466118 kubelet[1395]: I1213 14:28:26.466026 1395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rb8k5" podStartSLOduration=12.144723779 podStartE2EDuration="27.465993702s" podCreationTimestamp="2024-12-13 14:27:59 +0000 UTC" firstStartedPulling="2024-12-13 14:28:01.619444458 +0000 UTC m=+3.799647881" lastFinishedPulling="2024-12-13 14:28:16.940714341 +0000 UTC m=+19.120917804" observedRunningTime="2024-12-13 14:28:23.643411386 +0000 UTC m=+25.823614859" watchObservedRunningTime="2024-12-13 14:28:26.465993702 +0000 UTC m=+28.646197135" Dec 13 14:28:26.466489 kubelet[1395]: I1213 14:28:26.466334 1395 topology_manager.go:215] "Topology Admit Handler" podUID="b4b47470-4d8f-4899-ab0b-ab853a95ddb1" podNamespace="default" podName="nginx-deployment-85f456d6dd-2tbdl" Dec 13 14:28:26.472024 kubelet[1395]: I1213 14:28:26.471953 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8h5\" (UniqueName: \"kubernetes.io/projected/b4b47470-4d8f-4899-ab0b-ab853a95ddb1-kube-api-access-4p8h5\") pod \"nginx-deployment-85f456d6dd-2tbdl\" (UID: \"b4b47470-4d8f-4899-ab0b-ab853a95ddb1\") " pod="default/nginx-deployment-85f456d6dd-2tbdl" Dec 13 14:28:26.476309 systemd[1]: Created slice kubepods-besteffort-podb4b47470_4d8f_4899_ab0b_ab853a95ddb1.slice. 
Dec 13 14:28:26.505462 systemd-networkd[968]: lxc_health: Link UP Dec 13 14:28:26.510691 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:28:26.511010 systemd-networkd[968]: lxc_health: Gained carrier Dec 13 14:28:26.783011 env[1142]: time="2024-12-13T14:28:26.782938058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2tbdl,Uid:b4b47470-4d8f-4899-ab0b-ab853a95ddb1,Namespace:default,Attempt:0,}" Dec 13 14:28:26.864089 systemd-networkd[968]: lxcf8361636fb78: Link UP Dec 13 14:28:26.875736 kernel: eth0: renamed from tmp82095 Dec 13 14:28:26.887188 systemd-networkd[968]: lxcf8361636fb78: Gained carrier Dec 13 14:28:26.887689 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf8361636fb78: link becomes ready Dec 13 14:28:27.037844 systemd-networkd[968]: cilium_vxlan: Gained IPv6LL Dec 13 14:28:27.261389 kubelet[1395]: E1213 14:28:27.261316 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:27.677889 systemd-networkd[968]: lxc_health: Gained IPv6LL Dec 13 14:28:28.261683 kubelet[1395]: E1213 14:28:28.261594 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:28.765970 systemd-networkd[968]: lxcf8361636fb78: Gained IPv6LL Dec 13 14:28:29.263366 kubelet[1395]: E1213 14:28:29.262778 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:30.263317 kubelet[1395]: E1213 14:28:30.263219 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:31.264324 kubelet[1395]: E1213 14:28:31.264119 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:31.581480 env[1142]: time="2024-12-13T14:28:31.581383113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:31.582733 env[1142]: time="2024-12-13T14:28:31.581440501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:31.582733 env[1142]: time="2024-12-13T14:28:31.581455249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:31.582733 env[1142]: time="2024-12-13T14:28:31.581641550Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/820951d59ffd5502a97315cdd89115cd80842bdc6282be2f81d0cc402e6c84df pid=2452 runtime=io.containerd.runc.v2 Dec 13 14:28:31.600434 systemd[1]: run-containerd-runc-k8s.io-820951d59ffd5502a97315cdd89115cd80842bdc6282be2f81d0cc402e6c84df-runc.KMGh6E.mount: Deactivated successfully. Dec 13 14:28:31.606637 systemd[1]: Started cri-containerd-820951d59ffd5502a97315cdd89115cd80842bdc6282be2f81d0cc402e6c84df.scope. 
Dec 13 14:28:31.643931 env[1142]: time="2024-12-13T14:28:31.643879516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2tbdl,Uid:b4b47470-4d8f-4899-ab0b-ab853a95ddb1,Namespace:default,Attempt:0,} returns sandbox id \"820951d59ffd5502a97315cdd89115cd80842bdc6282be2f81d0cc402e6c84df\"" Dec 13 14:28:31.646293 env[1142]: time="2024-12-13T14:28:31.646272383Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:28:32.265257 kubelet[1395]: E1213 14:28:32.265175 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:33.265593 kubelet[1395]: E1213 14:28:33.265548 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:34.265923 kubelet[1395]: E1213 14:28:34.265879 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:35.266520 kubelet[1395]: E1213 14:28:35.266422 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:35.620953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340250715.mount: Deactivated successfully. Dec 13 14:28:36.267372 kubelet[1395]: E1213 14:28:36.267319 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:37.268173 kubelet[1395]: E1213 14:28:37.268106 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:38.012735 env[1142]: time="2024-12-13T14:28:38.012590117Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:38.017167 env[1142]: time="2024-12-13T14:28:38.017074901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:38.022584 env[1142]: time="2024-12-13T14:28:38.022495866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:38.027983 env[1142]: time="2024-12-13T14:28:38.027907594Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:38.031052 env[1142]: time="2024-12-13T14:28:38.030981475Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:28:38.038447 env[1142]: time="2024-12-13T14:28:38.038328152Z" level=info msg="CreateContainer within sandbox \"820951d59ffd5502a97315cdd89115cd80842bdc6282be2f81d0cc402e6c84df\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:28:38.065009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217373470.mount: Deactivated successfully. 
Dec 13 14:28:38.086335 env[1142]: time="2024-12-13T14:28:38.086242054Z" level=info msg="CreateContainer within sandbox \"820951d59ffd5502a97315cdd89115cd80842bdc6282be2f81d0cc402e6c84df\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"744cca7c01654996281fc61a0a38cbe406e37ae86c92db56af7eb7b668907cca\"" Dec 13 14:28:38.087866 env[1142]: time="2024-12-13T14:28:38.087736665Z" level=info msg="StartContainer for \"744cca7c01654996281fc61a0a38cbe406e37ae86c92db56af7eb7b668907cca\"" Dec 13 14:28:38.134387 systemd[1]: Started cri-containerd-744cca7c01654996281fc61a0a38cbe406e37ae86c92db56af7eb7b668907cca.scope. Dec 13 14:28:38.177458 env[1142]: time="2024-12-13T14:28:38.177405515Z" level=info msg="StartContainer for \"744cca7c01654996281fc61a0a38cbe406e37ae86c92db56af7eb7b668907cca\" returns successfully" Dec 13 14:28:38.268456 kubelet[1395]: E1213 14:28:38.268296 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:38.721538 kubelet[1395]: I1213 14:28:38.721312 1395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-2tbdl" podStartSLOduration=6.333040056 podStartE2EDuration="12.72127984s" podCreationTimestamp="2024-12-13 14:28:26 +0000 UTC" firstStartedPulling="2024-12-13 14:28:31.645710705 +0000 UTC m=+33.825914128" lastFinishedPulling="2024-12-13 14:28:38.033950439 +0000 UTC m=+40.214153912" observedRunningTime="2024-12-13 14:28:38.721043165 +0000 UTC m=+40.901246628" watchObservedRunningTime="2024-12-13 14:28:38.72127984 +0000 UTC m=+40.901483303" Dec 13 14:28:39.057132 systemd[1]: run-containerd-runc-k8s.io-744cca7c01654996281fc61a0a38cbe406e37ae86c92db56af7eb7b668907cca-runc.wyqyQ2.mount: Deactivated successfully. 
Dec 13 14:28:39.234470 kubelet[1395]: E1213 14:28:39.234339 1395 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:39.269002 kubelet[1395]: E1213 14:28:39.268940 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:40.269730 kubelet[1395]: E1213 14:28:40.269624 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:41.271330 kubelet[1395]: E1213 14:28:41.271261 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:42.272330 kubelet[1395]: E1213 14:28:42.272275 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:43.273387 kubelet[1395]: E1213 14:28:43.273281 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:44.273873 kubelet[1395]: E1213 14:28:44.273797 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:45.275019 kubelet[1395]: E1213 14:28:45.274915 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:46.276916 kubelet[1395]: E1213 14:28:46.276809 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:47.277643 kubelet[1395]: E1213 14:28:47.277548 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:48.278460 kubelet[1395]: E1213 14:28:48.278349 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:49.279812 kubelet[1395]: E1213 14:28:49.279570 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:50.280350 kubelet[1395]: E1213 14:28:50.280252 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:50.354585 kubelet[1395]: I1213 14:28:50.354509 1395 topology_manager.go:215] "Topology Admit Handler" podUID="70d1131c-55e7-46e0-a6b6-637659b5766f" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:28:50.363925 systemd[1]: Created slice kubepods-besteffort-pod70d1131c_55e7_46e0_a6b6_637659b5766f.slice. 
Dec 13 14:28:50.529330 kubelet[1395]: I1213 14:28:50.529262 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/70d1131c-55e7-46e0-a6b6-637659b5766f-data\") pod \"nfs-server-provisioner-0\" (UID: \"70d1131c-55e7-46e0-a6b6-637659b5766f\") " pod="default/nfs-server-provisioner-0" Dec 13 14:28:50.529761 kubelet[1395]: I1213 14:28:50.529718 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jnx\" (UniqueName: \"kubernetes.io/projected/70d1131c-55e7-46e0-a6b6-637659b5766f-kube-api-access-z9jnx\") pod \"nfs-server-provisioner-0\" (UID: \"70d1131c-55e7-46e0-a6b6-637659b5766f\") " pod="default/nfs-server-provisioner-0" Dec 13 14:28:50.669870 env[1142]: time="2024-12-13T14:28:50.669572158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:70d1131c-55e7-46e0-a6b6-637659b5766f,Namespace:default,Attempt:0,}" Dec 13 14:28:50.770891 systemd-networkd[968]: lxca9ab73d4f58b: Link UP Dec 13 14:28:50.780797 kernel: eth0: renamed from tmp80a53 Dec 13 14:28:50.788818 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:28:50.788965 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca9ab73d4f58b: link becomes ready Dec 13 14:28:50.791790 systemd-networkd[968]: lxca9ab73d4f58b: Gained carrier Dec 13 14:28:51.139759 env[1142]: time="2024-12-13T14:28:51.139295277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:51.139759 env[1142]: time="2024-12-13T14:28:51.139378924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:51.139759 env[1142]: time="2024-12-13T14:28:51.139411175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:51.140547 env[1142]: time="2024-12-13T14:28:51.140414710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80a53ab914ec33f71934f36c82724330cb746ec3ed5d94614c1f53467675194a pid=2570 runtime=io.containerd.runc.v2 Dec 13 14:28:51.189898 systemd[1]: Started cri-containerd-80a53ab914ec33f71934f36c82724330cb746ec3ed5d94614c1f53467675194a.scope. Dec 13 14:28:51.245945 env[1142]: time="2024-12-13T14:28:51.245885368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:70d1131c-55e7-46e0-a6b6-637659b5766f,Namespace:default,Attempt:0,} returns sandbox id \"80a53ab914ec33f71934f36c82724330cb746ec3ed5d94614c1f53467675194a\"" Dec 13 14:28:51.247851 env[1142]: time="2024-12-13T14:28:51.247827053Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:28:51.280703 kubelet[1395]: E1213 14:28:51.280587 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:51.650436 systemd[1]: run-containerd-runc-k8s.io-80a53ab914ec33f71934f36c82724330cb746ec3ed5d94614c1f53467675194a-runc.pMUKCh.mount: Deactivated successfully. 
Dec 13 14:28:52.281504 kubelet[1395]: E1213 14:28:52.281453 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:52.637999 systemd-networkd[968]: lxca9ab73d4f58b: Gained IPv6LL Dec 13 14:28:53.281800 kubelet[1395]: E1213 14:28:53.281717 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:54.282115 kubelet[1395]: E1213 14:28:54.281950 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:55.282703 kubelet[1395]: E1213 14:28:55.282582 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:56.282899 kubelet[1395]: E1213 14:28:56.282844 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:56.395960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324434289.mount: Deactivated successfully. Dec 13 14:28:57.283819 kubelet[1395]: E1213 14:28:57.283782 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:58.284099 kubelet[1395]: E1213 14:28:58.284027 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:59.233970 kubelet[1395]: E1213 14:28:59.233866 1395 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:59.284832 kubelet[1395]: E1213 14:28:59.284762 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:28:59.858920 env[1142]: time="2024-12-13T14:28:59.858878517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:59.862168 env[1142]: time="2024-12-13T14:28:59.862144149Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:59.865887 env[1142]: time="2024-12-13T14:28:59.865866320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:59.869274 env[1142]: time="2024-12-13T14:28:59.869249910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:59.870779 env[1142]: time="2024-12-13T14:28:59.870753753Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:28:59.878154 env[1142]: time="2024-12-13T14:28:59.878040626Z" level=info msg="CreateContainer within sandbox \"80a53ab914ec33f71934f36c82724330cb746ec3ed5d94614c1f53467675194a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:28:59.889954 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1444832709.mount: Deactivated successfully. Dec 13 14:28:59.901432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763163900.mount: Deactivated successfully. Dec 13 14:28:59.912982 env[1142]: time="2024-12-13T14:28:59.912942271Z" level=info msg="CreateContainer within sandbox \"80a53ab914ec33f71934f36c82724330cb746ec3ed5d94614c1f53467675194a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a2f9bb179a370bb8ad64c90976414aa6f5baf8a24b5eb92beae7149afb570293\"" Dec 13 14:28:59.914066 env[1142]: time="2024-12-13T14:28:59.914041902Z" level=info msg="StartContainer for \"a2f9bb179a370bb8ad64c90976414aa6f5baf8a24b5eb92beae7149afb570293\"" Dec 13 14:28:59.946851 systemd[1]: Started cri-containerd-a2f9bb179a370bb8ad64c90976414aa6f5baf8a24b5eb92beae7149afb570293.scope. Dec 13 14:28:59.998847 env[1142]: time="2024-12-13T14:28:59.998793837Z" level=info msg="StartContainer for \"a2f9bb179a370bb8ad64c90976414aa6f5baf8a24b5eb92beae7149afb570293\" returns successfully" Dec 13 14:29:00.285305 kubelet[1395]: E1213 14:29:00.285213 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:00.929157 kubelet[1395]: I1213 14:29:00.929063 1395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.302235017 podStartE2EDuration="10.929032825s" podCreationTimestamp="2024-12-13 14:28:50 +0000 UTC" firstStartedPulling="2024-12-13 14:28:51.247497364 +0000 UTC m=+53.427700787" lastFinishedPulling="2024-12-13 14:28:59.874295122 +0000 UTC m=+62.054498595" observedRunningTime="2024-12-13 14:29:00.927927412 +0000 UTC m=+63.108130915" watchObservedRunningTime="2024-12-13 14:29:00.929032825 +0000 UTC m=+63.109236298" Dec 13 14:29:01.286352 kubelet[1395]: E1213 14:29:01.286292 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:02.287180 kubelet[1395]: E1213 14:29:02.287096 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:03.287960 kubelet[1395]: E1213 14:29:03.287878 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:04.288735 kubelet[1395]: E1213 14:29:04.288593 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:05.289788 kubelet[1395]: E1213 14:29:05.289729 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:06.290918 kubelet[1395]: E1213 14:29:06.290839 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:07.292717 kubelet[1395]: E1213 14:29:07.292591 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:08.294394 kubelet[1395]: E1213 14:29:08.294341 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:09.295441 kubelet[1395]: E1213 14:29:09.295412 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:10.296113 kubelet[1395]: E1213 14:29:10.296043 1395 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:10.301087 kubelet[1395]: I1213 14:29:10.300958 1395 topology_manager.go:215] "Topology Admit Handler" podUID="d557d921-695f-4384-a978-3a3a271bb620" podNamespace="default" podName="test-pod-1" Dec 13 14:29:10.314574 systemd[1]: Created slice kubepods-besteffort-podd557d921_695f_4384_a978_3a3a271bb620.slice. Dec 13 14:29:10.470873 kubelet[1395]: I1213 14:29:10.470810 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a80dad17-ec24-441a-8a08-f919f7cedec1\" (UniqueName: \"kubernetes.io/nfs/d557d921-695f-4384-a978-3a3a271bb620-pvc-a80dad17-ec24-441a-8a08-f919f7cedec1\") pod \"test-pod-1\" (UID: \"d557d921-695f-4384-a978-3a3a271bb620\") " pod="default/test-pod-1" Dec 13 14:29:10.471275 kubelet[1395]: I1213 14:29:10.471234 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v5ws\" (UniqueName: \"kubernetes.io/projected/d557d921-695f-4384-a978-3a3a271bb620-kube-api-access-9v5ws\") pod \"test-pod-1\" (UID: \"d557d921-695f-4384-a978-3a3a271bb620\") " pod="default/test-pod-1" Dec 13 14:29:10.706756 kernel: FS-Cache: Loaded Dec 13 14:29:10.789727 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:29:10.789990 kernel: RPC: Registered udp transport module. Dec 13 14:29:10.790091 kernel: RPC: Registered tcp transport module. Dec 13 14:29:10.790147 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 14:29:10.870779 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:29:11.096049 kernel: NFS: Registering the id_resolver key type Dec 13 14:29:11.096307 kernel: Key type id_resolver registered Dec 13 14:29:11.096366 kernel: Key type id_legacy registered Dec 13 14:29:11.167798 nfsidmap[2694]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 14:29:11.177377 nfsidmap[2695]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 14:29:11.223902 env[1142]: time="2024-12-13T14:29:11.223050220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d557d921-695f-4384-a978-3a3a271bb620,Namespace:default,Attempt:0,}" Dec 13 14:29:11.297165 kubelet[1395]: E1213 14:29:11.297063 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:11.308581 systemd-networkd[968]: lxc7e96c4db6351: Link UP Dec 13 14:29:11.319758 kernel: eth0: renamed from tmp939f1 Dec 13 14:29:11.331344 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:29:11.331516 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7e96c4db6351: link becomes ready Dec 13 14:29:11.331355 systemd-networkd[968]: lxc7e96c4db6351: Gained carrier Dec 13 14:29:11.577320 env[1142]: time="2024-12-13T14:29:11.577185863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:11.577320 env[1142]: time="2024-12-13T14:29:11.577245795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:11.577320 env[1142]: time="2024-12-13T14:29:11.577260792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:29:11.577879 env[1142]: time="2024-12-13T14:29:11.577815888Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/939f1abd90d1148ca087530cc3679e04313cd2512f214b507e97c2e8dc031466 pid=2722 runtime=io.containerd.runc.v2 Dec 13 14:29:11.595412 systemd[1]: Started cri-containerd-939f1abd90d1148ca087530cc3679e04313cd2512f214b507e97c2e8dc031466.scope. Dec 13 14:29:11.663893 env[1142]: time="2024-12-13T14:29:11.663828421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d557d921-695f-4384-a978-3a3a271bb620,Namespace:default,Attempt:0,} returns sandbox id \"939f1abd90d1148ca087530cc3679e04313cd2512f214b507e97c2e8dc031466\"" Dec 13 14:29:11.665698 env[1142]: time="2024-12-13T14:29:11.665623690Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:29:12.298040 kubelet[1395]: E1213 14:29:12.297865 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:12.303741 env[1142]: time="2024-12-13T14:29:12.303595707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:12.401275 env[1142]: time="2024-12-13T14:29:12.401192172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:12.462362 env[1142]: time="2024-12-13T14:29:12.462237948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:12.499498 env[1142]: time="2024-12-13T14:29:12.499399517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:12.501254 env[1142]: time="2024-12-13T14:29:12.501181592Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:29:12.507893 env[1142]: time="2024-12-13T14:29:12.507821150Z" level=info msg="CreateContainer within sandbox \"939f1abd90d1148ca087530cc3679e04313cd2512f214b507e97c2e8dc031466\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:29:12.769562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274024921.mount: Deactivated successfully. Dec 13 14:29:12.926172 systemd-networkd[968]: lxc7e96c4db6351: Gained IPv6LL Dec 13 14:29:13.107205 env[1142]: time="2024-12-13T14:29:13.106796185Z" level=info msg="CreateContainer within sandbox \"939f1abd90d1148ca087530cc3679e04313cd2512f214b507e97c2e8dc031466\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"93d3aa6d5ddc302ffd4940ac60b99a81d45b33c3463cae4f0219360280821f89\"" Dec 13 14:29:13.108793 env[1142]: time="2024-12-13T14:29:13.108729201Z" level=info msg="StartContainer for \"93d3aa6d5ddc302ffd4940ac60b99a81d45b33c3463cae4f0219360280821f89\"" Dec 13 14:29:13.163759 systemd[1]: Started cri-containerd-93d3aa6d5ddc302ffd4940ac60b99a81d45b33c3463cae4f0219360280821f89.scope. 
Dec 13 14:29:13.230942 env[1142]: time="2024-12-13T14:29:13.230843258Z" level=info msg="StartContainer for \"93d3aa6d5ddc302ffd4940ac60b99a81d45b33c3463cae4f0219360280821f89\" returns successfully" Dec 13 14:29:13.298338 kubelet[1395]: E1213 14:29:13.298159 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:14.298445 kubelet[1395]: E1213 14:29:14.298370 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:15.300259 kubelet[1395]: E1213 14:29:15.300191 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:16.300766 kubelet[1395]: E1213 14:29:16.300733 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:17.301737 kubelet[1395]: E1213 14:29:17.301691 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:18.303333 kubelet[1395]: E1213 14:29:18.303279 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:19.234457 kubelet[1395]: E1213 14:29:19.234382 1395 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:19.304977 kubelet[1395]: E1213 14:29:19.304878 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:20.305641 kubelet[1395]: E1213 14:29:20.305574 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:21.306976 kubelet[1395]: E1213 14:29:21.306919 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:22.308562 kubelet[1395]: E1213 14:29:22.308442 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:22.331242 kubelet[1395]: I1213 14:29:22.331058 1395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=27.491389442 podStartE2EDuration="28.331019905s" podCreationTimestamp="2024-12-13 14:28:54 +0000 UTC" firstStartedPulling="2024-12-13 14:29:11.665168619 +0000 UTC m=+73.845372042" lastFinishedPulling="2024-12-13 14:29:12.504799042 +0000 UTC m=+74.685002505" observedRunningTime="2024-12-13 14:29:13.976798581 +0000 UTC m=+76.157002055" watchObservedRunningTime="2024-12-13 14:29:22.331019905 +0000 UTC m=+84.511223378" Dec 13 14:29:22.375135 systemd[1]: run-containerd-runc-k8s.io-f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3-runc.wA9jdd.mount: Deactivated successfully. 
Dec 13 14:29:22.429198 env[1142]: time="2024-12-13T14:29:22.429121564Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:29:22.439313 env[1142]: time="2024-12-13T14:29:22.439257990Z" level=info msg="StopContainer for \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\" with timeout 2 (s)" Dec 13 14:29:22.441397 env[1142]: time="2024-12-13T14:29:22.441351133Z" level=info msg="Stop container \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\" with signal terminated" Dec 13 14:29:22.450549 systemd-networkd[968]: lxc_health: Link DOWN Dec 13 14:29:22.450558 systemd-networkd[968]: lxc_health: Lost carrier Dec 13 14:29:22.490557 systemd[1]: cri-containerd-f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3.scope: Deactivated successfully. Dec 13 14:29:22.490868 systemd[1]: cri-containerd-f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3.scope: Consumed 8.839s CPU time. Dec 13 14:29:22.523750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3-rootfs.mount: Deactivated successfully. Dec 13 14:29:22.587388 env[1142]: time="2024-12-13T14:29:22.585871139Z" level=info msg="shim disconnected" id=f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3 Dec 13 14:29:22.587388 env[1142]: time="2024-12-13T14:29:22.586773823Z" level=warning msg="cleaning up after shim disconnected" id=f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3 namespace=k8s.io Dec 13 14:29:22.587388 env[1142]: time="2024-12-13T14:29:22.586839175Z" level=info msg="cleaning up dead shim" Dec 13 14:29:22.604433 env[1142]: time="2024-12-13T14:29:22.604315688Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2857 runtime=io.containerd.runc.v2\n" Dec 13 14:29:22.608366 env[1142]: time="2024-12-13T14:29:22.608306431Z" level=info msg="StopContainer for \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\" returns successfully" Dec 13 14:29:22.610073 env[1142]: time="2024-12-13T14:29:22.609980617Z" level=info msg="StopPodSandbox for \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\"" Dec 13 14:29:22.610211 env[1142]: time="2024-12-13T14:29:22.610107562Z" level=info msg="Container to stop \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:29:22.610211 env[1142]: time="2024-12-13T14:29:22.610186520Z" level=info msg="Container to stop \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:29:22.610384 env[1142]: time="2024-12-13T14:29:22.610219791Z" level=info msg="Container to stop \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:29:22.610384 env[1142]: time="2024-12-13T14:29:22.610251239Z" level=info msg="Container to stop \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:29:22.610384 env[1142]: time="2024-12-13T14:29:22.610313154Z" level=info msg="Container to stop 
\"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:29:22.614623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc-shm.mount: Deactivated successfully. Dec 13 14:29:22.629849 systemd[1]: cri-containerd-748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc.scope: Deactivated successfully. Dec 13 14:29:22.677182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc-rootfs.mount: Deactivated successfully. Dec 13 14:29:22.704957 env[1142]: time="2024-12-13T14:29:22.704815215Z" level=info msg="shim disconnected" id=748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc Dec 13 14:29:22.705349 env[1142]: time="2024-12-13T14:29:22.704962689Z" level=warning msg="cleaning up after shim disconnected" id=748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc namespace=k8s.io Dec 13 14:29:22.705349 env[1142]: time="2024-12-13T14:29:22.705000880Z" level=info msg="cleaning up dead shim" Dec 13 14:29:22.725695 env[1142]: time="2024-12-13T14:29:22.725527342Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2890 runtime=io.containerd.runc.v2\n" Dec 13 14:29:22.726965 env[1142]: time="2024-12-13T14:29:22.726866024Z" level=info msg="TearDown network for sandbox \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" successfully" Dec 13 14:29:22.726965 env[1142]: time="2024-12-13T14:29:22.726943479Z" level=info msg="StopPodSandbox for \"748d93f9867452b6f1bb23c88b57252c836f694549e2051358f1949f4337cbcc\" returns successfully" Dec 13 14:29:22.873129 kubelet[1395]: I1213 14:29:22.870472 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-cgroup\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.873129 kubelet[1395]: I1213 14:29:22.871162 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-xtables-lock\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.873129 kubelet[1395]: I1213 14:29:22.871252 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-hubble-tls\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.873129 kubelet[1395]: I1213 14:29:22.871322 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23da3eee-0524-499e-82ee-64acfa7a2160-clustermesh-secrets\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.873129 kubelet[1395]: I1213 14:29:22.871385 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctw9g\" (UniqueName: \"kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-kube-api-access-ctw9g\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: 
\"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.873129 kubelet[1395]: I1213 14:29:22.871445 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-lib-modules\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.874328 kubelet[1395]: I1213 14:29:22.871496 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cni-path\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.874328 kubelet[1395]: I1213 14:29:22.871545 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-bpf-maps\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.874328 kubelet[1395]: I1213 14:29:22.871604 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-config-path\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.874328 kubelet[1395]: I1213 14:29:22.871654 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-etc-cni-netd\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.874328 kubelet[1395]: I1213 14:29:22.871761 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-net\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.874328 kubelet[1395]: I1213 14:29:22.871825 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-hostproc\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.875103 kubelet[1395]: I1213 14:29:22.871877 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-kernel\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.875103 kubelet[1395]: I1213 14:29:22.871927 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-run\") pod \"23da3eee-0524-499e-82ee-64acfa7a2160\" (UID: \"23da3eee-0524-499e-82ee-64acfa7a2160\") " Dec 13 14:29:22.875103 kubelet[1395]: I1213 14:29:22.870874 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.875103 kubelet[1395]: I1213 14:29:22.872038 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.875103 kubelet[1395]: I1213 14:29:22.872141 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.875741 kubelet[1395]: I1213 14:29:22.872830 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.879294 kubelet[1395]: I1213 14:29:22.879098 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.879614 kubelet[1395]: I1213 14:29:22.879299 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.879614 kubelet[1395]: I1213 14:29:22.879400 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-hostproc" (OuterVolumeSpecName: "hostproc") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.879614 kubelet[1395]: I1213 14:29:22.879495 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.880063 kubelet[1395]: I1213 14:29:22.879587 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.880220 kubelet[1395]: I1213 14:29:22.880155 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cni-path" (OuterVolumeSpecName: "cni-path") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:22.891772 kubelet[1395]: I1213 14:29:22.891634 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23da3eee-0524-499e-82ee-64acfa7a2160-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:29:22.895045 kubelet[1395]: I1213 14:29:22.894976 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:29:22.898151 kubelet[1395]: I1213 14:29:22.898058 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-kube-api-access-ctw9g" (OuterVolumeSpecName: "kube-api-access-ctw9g") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "kube-api-access-ctw9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:29:22.901551 kubelet[1395]: I1213 14:29:22.901468 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "23da3eee-0524-499e-82ee-64acfa7a2160" (UID: "23da3eee-0524-499e-82ee-64acfa7a2160"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:29:22.972793 kubelet[1395]: I1213 14:29:22.972620 1395 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-hostproc\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.972793 kubelet[1395]: I1213 14:29:22.972731 1395 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-kernel\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.972793 kubelet[1395]: I1213 14:29:22.972768 1395 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-run\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.972793 kubelet[1395]: I1213 14:29:22.972792 1395 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-cgroup\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972814 1395 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-xtables-lock\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972835 1395 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-hubble-tls\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972857 1395 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23da3eee-0524-499e-82ee-64acfa7a2160-clustermesh-secrets\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972879 1395 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ctw9g\" (UniqueName: \"kubernetes.io/projected/23da3eee-0524-499e-82ee-64acfa7a2160-kube-api-access-ctw9g\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972901 1395 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-lib-modules\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972922 1395 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-cni-path\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972941 1395 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-bpf-maps\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973239 kubelet[1395]: I1213 14:29:22.972961 1395 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23da3eee-0524-499e-82ee-64acfa7a2160-cilium-config-path\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973846 kubelet[1395]: I1213 14:29:22.972981 1395 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-etc-cni-netd\") on 
node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.973846 kubelet[1395]: I1213 14:29:22.973019 1395 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23da3eee-0524-499e-82ee-64acfa7a2160-host-proc-sys-net\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:22.984426 kubelet[1395]: I1213 14:29:22.984386 1395 scope.go:117] "RemoveContainer" containerID="f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3" Dec 13 14:29:22.993353 systemd[1]: Removed slice kubepods-burstable-pod23da3eee_0524_499e_82ee_64acfa7a2160.slice. Dec 13 14:29:22.993590 systemd[1]: kubepods-burstable-pod23da3eee_0524_499e_82ee_64acfa7a2160.slice: Consumed 8.955s CPU time. Dec 13 14:29:22.999592 env[1142]: time="2024-12-13T14:29:22.999517873Z" level=info msg="RemoveContainer for \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\"" Dec 13 14:29:23.006371 env[1142]: time="2024-12-13T14:29:23.006293205Z" level=info msg="RemoveContainer for \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\" returns successfully" Dec 13 14:29:23.007277 kubelet[1395]: I1213 14:29:23.007229 1395 scope.go:117] "RemoveContainer" containerID="e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5" Dec 13 14:29:23.009600 env[1142]: time="2024-12-13T14:29:23.009525112Z" level=info msg="RemoveContainer for \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\"" Dec 13 14:29:23.017260 env[1142]: time="2024-12-13T14:29:23.017178214Z" level=info msg="RemoveContainer for \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\" returns successfully" Dec 13 14:29:23.018557 kubelet[1395]: I1213 14:29:23.018471 1395 scope.go:117] "RemoveContainer" containerID="f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef" Dec 13 14:29:23.025762 env[1142]: time="2024-12-13T14:29:23.024968892Z" level=info msg="RemoveContainer for \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\"" Dec 13 14:29:23.031530 env[1142]: time="2024-12-13T14:29:23.031469875Z" level=info msg="RemoveContainer for \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\" returns successfully" Dec 13 14:29:23.032335 kubelet[1395]: I1213 14:29:23.032296 1395 scope.go:117] "RemoveContainer" containerID="5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648" Dec 13 14:29:23.035919 env[1142]: time="2024-12-13T14:29:23.035855394Z" level=info msg="RemoveContainer for \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\"" Dec 13 14:29:23.041932 env[1142]: time="2024-12-13T14:29:23.041853825Z" level=info msg="RemoveContainer for \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\" returns successfully" Dec 13 14:29:23.042745 kubelet[1395]: I1213 14:29:23.042631 1395 scope.go:117] "RemoveContainer" containerID="0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4" Dec 13 14:29:23.045768 env[1142]: time="2024-12-13T14:29:23.045609485Z" level=info msg="RemoveContainer for \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\"" Dec 13 14:29:23.055450 env[1142]: time="2024-12-13T14:29:23.055345752Z" level=info msg="RemoveContainer for \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\" returns successfully" Dec 13 14:29:23.055828 kubelet[1395]: I1213 14:29:23.055784 1395 scope.go:117] "RemoveContainer" containerID="f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3" Dec 13 14:29:23.056507 env[1142]: 
time="2024-12-13T14:29:23.056346620Z" level=error msg="ContainerStatus for \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\": not found" Dec 13 14:29:23.057163 kubelet[1395]: E1213 14:29:23.057058 1395 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\": not found" containerID="f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3" Dec 13 14:29:23.057441 kubelet[1395]: I1213 14:29:23.057209 1395 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3"} err="failed to get container status \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f20fc8cbe37acb2975b398140442743c950827040a62627d776b223116d50cf3\": not found" Dec 13 14:29:23.057441 kubelet[1395]: I1213 14:29:23.057436 1395 scope.go:117] "RemoveContainer" containerID="e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5" Dec 13 14:29:23.058337 env[1142]: time="2024-12-13T14:29:23.058089745Z" level=error msg="ContainerStatus for \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\": not found" Dec 13 14:29:23.059153 kubelet[1395]: E1213 14:29:23.059046 1395 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\": not found" containerID="e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5" Dec 13 14:29:23.059324 kubelet[1395]: I1213 14:29:23.059146 1395 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5"} err="failed to get container status \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2749d76860d109963babb280ae9f2dd07b8a2bcd0be2047b28832855e3bdfe5\": not found" Dec 13 14:29:23.059324 kubelet[1395]: I1213 14:29:23.059226 1395 scope.go:117] "RemoveContainer" containerID="f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef" Dec 13 14:29:23.060068 env[1142]: time="2024-12-13T14:29:23.059842878Z" level=error msg="ContainerStatus for \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\": not found" Dec 13 14:29:23.060411 kubelet[1395]: E1213 14:29:23.060290 1395 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\": not found" containerID="f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef" Dec 13 14:29:23.060565 kubelet[1395]: I1213 
14:29:23.060384 1395 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef"} err="failed to get container status \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\": rpc error: code = NotFound desc = an error occurred when try to find container \"f90f1bb64a8aa07264830f2d9b5b2a9a217eef0d5f2c4db10ba7efd991c28fef\": not found" Dec 13 14:29:23.060565 kubelet[1395]: I1213 14:29:23.060466 1395 scope.go:117] "RemoveContainer" containerID="5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648" Dec 13 14:29:23.061074 env[1142]: time="2024-12-13T14:29:23.060939663Z" level=error msg="ContainerStatus for \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\": not found" Dec 13 14:29:23.061391 kubelet[1395]: E1213 14:29:23.061347 1395 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\": not found" containerID="5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648" Dec 13 14:29:23.061610 kubelet[1395]: I1213 14:29:23.061563 1395 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648"} err="failed to get container status \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c1d280b22484e95e2aa6b0b945a43d0d470b214a5d1a9a6de13f695b3a53648\": not found" Dec 13 14:29:23.061857 kubelet[1395]: I1213 14:29:23.061826 1395 scope.go:117] "RemoveContainer" containerID="0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4" Dec 13 14:29:23.062782 env[1142]: time="2024-12-13T14:29:23.062577323Z" level=error msg="ContainerStatus for \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\": not found" Dec 13 14:29:23.063422 kubelet[1395]: E1213 14:29:23.063322 1395 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\": not found" containerID="0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4" Dec 13 14:29:23.063558 kubelet[1395]: I1213 14:29:23.063415 1395 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4"} err="failed to get container status \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a34daa4586411625b1c052a7fed94160eea7c66a29d9c824d480788eaa9ead4\": not found" Dec 13 14:29:23.309498 kubelet[1395]: E1213 14:29:23.309393 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:23.360381 systemd[1]: 
var-lib-kubelet-pods-23da3eee\x2d0524\x2d499e\x2d82ee\x2d64acfa7a2160-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dctw9g.mount: Deactivated successfully. Dec 13 14:29:23.360599 systemd[1]: var-lib-kubelet-pods-23da3eee\x2d0524\x2d499e\x2d82ee\x2d64acfa7a2160-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:29:23.360814 systemd[1]: var-lib-kubelet-pods-23da3eee\x2d0524\x2d499e\x2d82ee\x2d64acfa7a2160-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:29:23.453603 kubelet[1395]: I1213 14:29:23.453546 1395 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" path="/var/lib/kubelet/pods/23da3eee-0524-499e-82ee-64acfa7a2160/volumes" Dec 13 14:29:24.310346 kubelet[1395]: E1213 14:29:24.310215 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:24.362502 kubelet[1395]: E1213 14:29:24.362393 1395 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:29:25.310934 kubelet[1395]: E1213 14:29:25.310883 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:26.312215 kubelet[1395]: E1213 14:29:26.312109 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:26.854671 kubelet[1395]: I1213 14:29:26.854599 1395 topology_manager.go:215] "Topology Admit Handler" podUID="d52999bb-8492-473d-9071-736588a4056e" podNamespace="kube-system" podName="cilium-d875x" Dec 13 14:29:26.854922 kubelet[1395]: E1213 14:29:26.854728 1395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" containerName="apply-sysctl-overwrites" Dec 13 14:29:26.854922 kubelet[1395]: E1213 14:29:26.854763 1395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" containerName="clean-cilium-state" Dec 13 14:29:26.854922 kubelet[1395]: E1213 14:29:26.854786 1395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" containerName="mount-cgroup" Dec 13 14:29:26.854922 kubelet[1395]: E1213 14:29:26.854808 1395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" containerName="mount-bpf-fs" Dec 13 14:29:26.854922 kubelet[1395]: E1213 14:29:26.854833 1395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" containerName="cilium-agent" Dec 13 14:29:26.854922 kubelet[1395]: I1213 14:29:26.854894 1395 memory_manager.go:354] "RemoveStaleState removing state" podUID="23da3eee-0524-499e-82ee-64acfa7a2160" containerName="cilium-agent" Dec 13 14:29:26.859617 kubelet[1395]: I1213 14:29:26.859478 1395 topology_manager.go:215] "Topology Admit Handler" podUID="f52618ad-02e3-4894-b193-cbe024eeefdb" podNamespace="kube-system" podName="cilium-operator-599987898-rnhxp" Dec 13 14:29:26.867319 systemd[1]: Created slice kubepods-burstable-podd52999bb_8492_473d_9071_736588a4056e.slice. Dec 13 14:29:26.880400 systemd[1]: Created slice kubepods-besteffort-podf52618ad_02e3_4894_b193_cbe024eeefdb.slice. 
Dec 13 14:29:27.003556 kubelet[1395]: I1213 14:29:27.003520 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-net\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.003805 kubelet[1395]: I1213 14:29:27.003785 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f52618ad-02e3-4894-b193-cbe024eeefdb-cilium-config-path\") pod \"cilium-operator-599987898-rnhxp\" (UID: \"f52618ad-02e3-4894-b193-cbe024eeefdb\") " pod="kube-system/cilium-operator-599987898-rnhxp" Dec 13 14:29:27.003917 kubelet[1395]: I1213 14:29:27.003901 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-bpf-maps\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004042 kubelet[1395]: I1213 14:29:27.004020 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-etc-cni-netd\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004173 kubelet[1395]: I1213 14:29:27.004150 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-cilium-ipsec-secrets\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004297 kubelet[1395]: I1213 14:29:27.004279 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxgwb\" (UniqueName: \"kubernetes.io/projected/f52618ad-02e3-4894-b193-cbe024eeefdb-kube-api-access-dxgwb\") pod \"cilium-operator-599987898-rnhxp\" (UID: \"f52618ad-02e3-4894-b193-cbe024eeefdb\") " pod="kube-system/cilium-operator-599987898-rnhxp" Dec 13 14:29:27.004392 kubelet[1395]: I1213 14:29:27.004377 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52999bb-8492-473d-9071-736588a4056e-cilium-config-path\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004483 kubelet[1395]: I1213 14:29:27.004469 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-cgroup\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004579 kubelet[1395]: I1213 14:29:27.004564 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-clustermesh-secrets\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004690 kubelet[1395]: I1213 14:29:27.004673 1395 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-xtables-lock\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004811 kubelet[1395]: I1213 14:29:27.004795 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-kernel\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004901 kubelet[1395]: I1213 14:29:27.004887 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-hubble-tls\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.004992 kubelet[1395]: I1213 14:29:27.004977 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-lib-modules\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.005088 kubelet[1395]: I1213 14:29:27.005073 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkkh\" (UniqueName: \"kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-kube-api-access-2pkkh\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.005192 kubelet[1395]: I1213 14:29:27.005176 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-run\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.005288 kubelet[1395]: I1213 14:29:27.005273 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-hostproc\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.005414 kubelet[1395]: I1213 14:29:27.005396 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cni-path\") pod \"cilium-d875x\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " pod="kube-system/cilium-d875x" Dec 13 14:29:27.189636 env[1142]: time="2024-12-13T14:29:27.187268862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rnhxp,Uid:f52618ad-02e3-4894-b193-cbe024eeefdb,Namespace:kube-system,Attempt:0,}" Dec 13 14:29:27.238413 env[1142]: time="2024-12-13T14:29:27.238013315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:27.238413 env[1142]: time="2024-12-13T14:29:27.238098923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:27.238413 env[1142]: time="2024-12-13T14:29:27.238131935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:29:27.239635 env[1142]: time="2024-12-13T14:29:27.239394691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f08cd2a70e5dcac3040a9bfda64f9de7e21a8838b93fb90fb5a6b7fb6f0ff09d pid=2919 runtime=io.containerd.runc.v2 Dec 13 14:29:27.267914 systemd[1]: Started cri-containerd-f08cd2a70e5dcac3040a9bfda64f9de7e21a8838b93fb90fb5a6b7fb6f0ff09d.scope. Dec 13 14:29:27.313037 kubelet[1395]: E1213 14:29:27.312933 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:27.330081 env[1142]: time="2024-12-13T14:29:27.329761244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rnhxp,Uid:f52618ad-02e3-4894-b193-cbe024eeefdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f08cd2a70e5dcac3040a9bfda64f9de7e21a8838b93fb90fb5a6b7fb6f0ff09d\"" Dec 13 14:29:27.332972 env[1142]: time="2024-12-13T14:29:27.332929960Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:29:27.479506 env[1142]: time="2024-12-13T14:29:27.479319361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d875x,Uid:d52999bb-8492-473d-9071-736588a4056e,Namespace:kube-system,Attempt:0,}" Dec 13 14:29:27.519137 env[1142]: time="2024-12-13T14:29:27.519004779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:27.519382 env[1142]: time="2024-12-13T14:29:27.519103792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:27.519616 env[1142]: time="2024-12-13T14:29:27.519552716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:29:27.520117 env[1142]: time="2024-12-13T14:29:27.520046614Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99 pid=2959 runtime=io.containerd.runc.v2 Dec 13 14:29:27.541525 systemd[1]: Started cri-containerd-37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99.scope. 
Dec 13 14:29:27.587124 env[1142]: time="2024-12-13T14:29:27.587058904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d875x,Uid:d52999bb-8492-473d-9071-736588a4056e,Namespace:kube-system,Attempt:0,} returns sandbox id \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\"" Dec 13 14:29:27.589960 env[1142]: time="2024-12-13T14:29:27.589923125Z" level=info msg="CreateContainer within sandbox \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:29:27.633724 env[1142]: time="2024-12-13T14:29:27.633596793Z" level=info msg="CreateContainer within sandbox \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\"" Dec 13 14:29:27.634924 env[1142]: time="2024-12-13T14:29:27.634898921Z" level=info msg="StartContainer for \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\"" Dec 13 14:29:27.654601 systemd[1]: Started cri-containerd-ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048.scope. Dec 13 14:29:27.677465 systemd[1]: cri-containerd-ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048.scope: Deactivated successfully. Dec 13 14:29:27.738897 env[1142]: time="2024-12-13T14:29:27.738692216Z" level=info msg="shim disconnected" id=ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048 Dec 13 14:29:27.738897 env[1142]: time="2024-12-13T14:29:27.738781582Z" level=warning msg="cleaning up after shim disconnected" id=ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048 namespace=k8s.io Dec 13 14:29:27.738897 env[1142]: time="2024-12-13T14:29:27.738803933Z" level=info msg="cleaning up dead shim" Dec 13 14:29:27.755087 env[1142]: time="2024-12-13T14:29:27.754995841Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3016 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:29:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:29:27.756044 env[1142]: time="2024-12-13T14:29:27.755850048Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Dec 13 14:29:27.756921 env[1142]: time="2024-12-13T14:29:27.756290786Z" level=error msg="Failed to pipe stdout of container \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\"" error="reading from a closed fifo" Dec 13 14:29:27.757204 env[1142]: time="2024-12-13T14:29:27.756778733Z" level=error msg="Failed to pipe stderr of container \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\"" error="reading from a closed fifo" Dec 13 14:29:27.762856 env[1142]: time="2024-12-13T14:29:27.762757669Z" level=error msg="StartContainer for \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:29:27.763312 kubelet[1395]: E1213 14:29:27.763169 1395 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: 
code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048" Dec 13 14:29:27.763550 kubelet[1395]: E1213 14:29:27.763503 1395 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:29:27.763550 kubelet[1395]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:29:27.763550 kubelet[1395]: rm /hostbin/cilium-mount Dec 13 14:29:27.763847 kubelet[1395]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pkkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-d875x_kube-system(d52999bb-8492-473d-9071-736588a4056e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:29:27.763847 kubelet[1395]: E1213 14:29:27.763577 1395 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-d875x" podUID="d52999bb-8492-473d-9071-736588a4056e" Dec 13 14:29:28.009803 env[1142]: time="2024-12-13T14:29:28.009525944Z" level=info msg="CreateContainer within sandbox \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 14:29:28.035438 env[1142]: 
time="2024-12-13T14:29:28.035350718Z" level=info msg="CreateContainer within sandbox \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\"" Dec 13 14:29:28.037352 env[1142]: time="2024-12-13T14:29:28.037300550Z" level=info msg="StartContainer for \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\"" Dec 13 14:29:28.072893 systemd[1]: Started cri-containerd-744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14.scope. Dec 13 14:29:28.095850 systemd[1]: cri-containerd-744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14.scope: Deactivated successfully. Dec 13 14:29:28.128313 env[1142]: time="2024-12-13T14:29:28.128265944Z" level=info msg="shim disconnected" id=744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14 Dec 13 14:29:28.128588 env[1142]: time="2024-12-13T14:29:28.128557616Z" level=warning msg="cleaning up after shim disconnected" id=744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14 namespace=k8s.io Dec 13 14:29:28.128678 env[1142]: time="2024-12-13T14:29:28.128647203Z" level=info msg="cleaning up dead shim" Dec 13 14:29:28.144255 env[1142]: time="2024-12-13T14:29:28.144218885Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3053 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:29:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:29:28.144602 env[1142]: time="2024-12-13T14:29:28.144553958Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Dec 13 14:29:28.148775 env[1142]: time="2024-12-13T14:29:28.144914468Z" level=error msg="Failed to pipe stderr of container \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\"" error="reading from a closed fifo" Dec 13 14:29:28.148848 env[1142]: time="2024-12-13T14:29:28.148700822Z" level=error msg="Failed to pipe stdout of container \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\"" error="reading from a closed fifo" Dec 13 14:29:28.152756 env[1142]: time="2024-12-13T14:29:28.152719730Z" level=error msg="StartContainer for \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:29:28.153490 kubelet[1395]: E1213 14:29:28.153015 1395 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14" Dec 13 14:29:28.153490 kubelet[1395]: E1213 14:29:28.153141 1395 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec 
cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:29:28.153490 kubelet[1395]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:29:28.153490 kubelet[1395]: rm /hostbin/cilium-mount Dec 13 14:29:28.153490 kubelet[1395]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pkkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-d875x_kube-system(d52999bb-8492-473d-9071-736588a4056e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:29:28.153490 kubelet[1395]: E1213 14:29:28.153171 1395 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-d875x" podUID="d52999bb-8492-473d-9071-736588a4056e" Dec 13 14:29:28.313212 kubelet[1395]: E1213 14:29:28.313125 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:29.028123 kubelet[1395]: I1213 14:29:29.027524 1395 scope.go:117] "RemoveContainer" containerID="ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048" Dec 13 14:29:29.028123 kubelet[1395]: I1213 14:29:29.027878 1395 scope.go:117] "RemoveContainer" containerID="ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048" Dec 13 14:29:29.029850 env[1142]: time="2024-12-13T14:29:29.029818449Z" level=info msg="RemoveContainer for \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\"" Dec 13 14:29:29.039620 env[1142]: time="2024-12-13T14:29:29.039586111Z" level=info msg="RemoveContainer for \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\" returns 
successfully" Dec 13 14:29:29.054453 env[1142]: time="2024-12-13T14:29:29.054411393Z" level=info msg="RemoveContainer for \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\"" Dec 13 14:29:29.054453 env[1142]: time="2024-12-13T14:29:29.054451598Z" level=info msg="RemoveContainer for \"ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048\" returns successfully" Dec 13 14:29:29.062844 kubelet[1395]: E1213 14:29:29.054947 1395 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-d875x_kube-system(d52999bb-8492-473d-9071-736588a4056e)\"" pod="kube-system/cilium-d875x" podUID="d52999bb-8492-473d-9071-736588a4056e" Dec 13 14:29:29.261633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977866974.mount: Deactivated successfully. Dec 13 14:29:29.314564 kubelet[1395]: E1213 14:29:29.313878 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:29.363489 kubelet[1395]: E1213 14:29:29.363392 1395 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:29:30.031582 env[1142]: time="2024-12-13T14:29:30.031490473Z" level=info msg="StopPodSandbox for \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\"" Dec 13 14:29:30.034173 env[1142]: time="2024-12-13T14:29:30.031604795Z" level=info msg="Container to stop \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:29:30.035942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99-shm.mount: Deactivated successfully. Dec 13 14:29:30.054034 systemd[1]: cri-containerd-37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99.scope: Deactivated successfully. Dec 13 14:29:30.110728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99-rootfs.mount: Deactivated successfully. 
Dec 13 14:29:30.406289 kubelet[1395]: E1213 14:29:30.314378 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:30.657788 env[1142]: time="2024-12-13T14:29:30.657516468Z" level=info msg="shim disconnected" id=37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99 Dec 13 14:29:30.657788 env[1142]: time="2024-12-13T14:29:30.657609972Z" level=warning msg="cleaning up after shim disconnected" id=37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99 namespace=k8s.io Dec 13 14:29:30.657788 env[1142]: time="2024-12-13T14:29:30.657633726Z" level=info msg="cleaning up dead shim" Dec 13 14:29:30.673737 env[1142]: time="2024-12-13T14:29:30.673621264Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3085 runtime=io.containerd.runc.v2\n" Dec 13 14:29:30.674334 env[1142]: time="2024-12-13T14:29:30.674281722Z" level=info msg="TearDown network for sandbox \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\" successfully" Dec 13 14:29:30.674432 env[1142]: time="2024-12-13T14:29:30.674338828Z" level=info msg="StopPodSandbox for \"37e6b82baaefdaafb2cbc805de60a1bb02b111c03180377f8d81998e5094fd99\" returns successfully" Dec 13 14:29:30.719413 env[1142]: time="2024-12-13T14:29:30.719334046Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:30.723191 env[1142]: time="2024-12-13T14:29:30.723121416Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:30.727490 env[1142]: time="2024-12-13T14:29:30.727434654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:30.728770 env[1142]: time="2024-12-13T14:29:30.728721165Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:29:30.733560 env[1142]: time="2024-12-13T14:29:30.733523281Z" level=info msg="CreateContainer within sandbox \"f08cd2a70e5dcac3040a9bfda64f9de7e21a8838b93fb90fb5a6b7fb6f0ff09d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:29:30.759875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92865234.mount: Deactivated successfully. Dec 13 14:29:30.773012 env[1142]: time="2024-12-13T14:29:30.772923770Z" level=info msg="CreateContainer within sandbox \"f08cd2a70e5dcac3040a9bfda64f9de7e21a8838b93fb90fb5a6b7fb6f0ff09d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3fdbd756a1ee0da899afbc5add1efa248f2062b9fd4282f3c87dd8073d76df9c\"" Dec 13 14:29:30.774034 env[1142]: time="2024-12-13T14:29:30.773998839Z" level=info msg="StartContainer for \"3fdbd756a1ee0da899afbc5add1efa248f2062b9fd4282f3c87dd8073d76df9c\"" Dec 13 14:29:30.808396 systemd[1]: Started cri-containerd-3fdbd756a1ee0da899afbc5add1efa248f2062b9fd4282f3c87dd8073d76df9c.scope. 
Dec 13 14:29:30.816707 kubelet[1395]: I1213 14:29:30.812342 1395 setters.go:580] "Node became not ready" node="172.24.4.127" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:29:30Z","lastTransitionTime":"2024-12-13T14:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:29:30.834485 kubelet[1395]: I1213 14:29:30.834417 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-cgroup\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.834698 kubelet[1395]: I1213 14:29:30.834565 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-clustermesh-secrets\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.834698 kubelet[1395]: I1213 14:29:30.834509 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.835695 kubelet[1395]: I1213 14:29:30.835265 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-etc-cni-netd\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.835695 kubelet[1395]: I1213 14:29:30.835354 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-hubble-tls\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.835695 kubelet[1395]: I1213 14:29:30.835377 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-lib-modules\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.835695 kubelet[1395]: I1213 14:29:30.835483 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pkkh\" (UniqueName: \"kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-kube-api-access-2pkkh\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.835505 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-bpf-maps\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836408 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-xtables-lock\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836429 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-kernel\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836483 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-hostproc\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836500 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cni-path\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836540 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-net\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836581 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-cilium-ipsec-secrets\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836627 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52999bb-8492-473d-9071-736588a4056e-cilium-config-path\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836679 1395 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-run\") pod \"d52999bb-8492-473d-9071-736588a4056e\" (UID: \"d52999bb-8492-473d-9071-736588a4056e\") " Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836727 1395 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-cgroup\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836789 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.837822 kubelet[1395]: I1213 14:29:30.836846 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.838478 kubelet[1395]: I1213 14:29:30.838326 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.838478 kubelet[1395]: I1213 14:29:30.838384 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-hostproc" (OuterVolumeSpecName: "hostproc") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.838478 kubelet[1395]: I1213 14:29:30.838406 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.838478 kubelet[1395]: I1213 14:29:30.838428 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.838478 kubelet[1395]: I1213 14:29:30.838447 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.838680 kubelet[1395]: I1213 14:29:30.838571 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cni-path" (OuterVolumeSpecName: "cni-path") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.841998 systemd[1]: var-lib-kubelet-pods-d52999bb\x2d8492\x2d473d\x2d9071\x2d736588a4056e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:29:30.849962 kubelet[1395]: I1213 14:29:30.838600 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:29:30.850202 kubelet[1395]: I1213 14:29:30.848180 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:29:30.850313 kubelet[1395]: I1213 14:29:30.848241 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:29:30.850406 kubelet[1395]: I1213 14:29:30.849901 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52999bb-8492-473d-9071-736588a4056e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:29:30.851777 kubelet[1395]: W1213 14:29:30.851719 1395 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd52999bb_8492_473d_9071_736588a4056e.slice/cri-containerd-ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048.scope WatchSource:0}: container "ef02c58ca11a0dcf8611bc997e3049e94fb8b0ddd6abd611bbc77c6c512f6048" in namespace "k8s.io": not found Dec 13 14:29:30.854165 kubelet[1395]: I1213 14:29:30.854093 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:29:30.858309 kubelet[1395]: I1213 14:29:30.858269 1395 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-kube-api-access-2pkkh" (OuterVolumeSpecName: "kube-api-access-2pkkh") pod "d52999bb-8492-473d-9071-736588a4056e" (UID: "d52999bb-8492-473d-9071-736588a4056e"). InnerVolumeSpecName "kube-api-access-2pkkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:29:30.858786 env[1142]: time="2024-12-13T14:29:30.858732804Z" level=info msg="StartContainer for \"3fdbd756a1ee0da899afbc5add1efa248f2062b9fd4282f3c87dd8073d76df9c\" returns successfully" Dec 13 14:29:30.937833 kubelet[1395]: I1213 14:29:30.937722 1395 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-bpf-maps\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938005 kubelet[1395]: I1213 14:29:30.937990 1395 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-xtables-lock\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938130 kubelet[1395]: I1213 14:29:30.938115 1395 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-kernel\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938235 kubelet[1395]: I1213 14:29:30.938223 1395 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-hostproc\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938330 kubelet[1395]: I1213 14:29:30.938318 1395 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cni-path\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938424 kubelet[1395]: I1213 14:29:30.938412 1395 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-host-proc-sys-net\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938518 kubelet[1395]: I1213 14:29:30.938506 1395 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-cilium-ipsec-secrets\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938611 kubelet[1395]: I1213 14:29:30.938597 1395 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52999bb-8492-473d-9071-736588a4056e-cilium-config-path\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938728 kubelet[1395]: I1213 14:29:30.938716 1395 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-cilium-run\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938835 kubelet[1395]: I1213 14:29:30.938823 1395 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d52999bb-8492-473d-9071-736588a4056e-clustermesh-secrets\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.938932 kubelet[1395]: I1213 14:29:30.938921 1395 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-etc-cni-netd\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.939076 kubelet[1395]: I1213 14:29:30.939064 1395 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-hubble-tls\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.939178 
kubelet[1395]: I1213 14:29:30.939166 1395 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52999bb-8492-473d-9071-736588a4056e-lib-modules\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:30.939272 kubelet[1395]: I1213 14:29:30.939259 1395 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2pkkh\" (UniqueName: \"kubernetes.io/projected/d52999bb-8492-473d-9071-736588a4056e-kube-api-access-2pkkh\") on node \"172.24.4.127\" DevicePath \"\"" Dec 13 14:29:31.040699 kubelet[1395]: I1213 14:29:31.040647 1395 scope.go:117] "RemoveContainer" containerID="744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14" Dec 13 14:29:31.042750 env[1142]: time="2024-12-13T14:29:31.042625846Z" level=info msg="RemoveContainer for \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\"" Dec 13 14:29:31.046130 env[1142]: time="2024-12-13T14:29:31.046069368Z" level=info msg="RemoveContainer for \"744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14\" returns successfully" Dec 13 14:29:31.048171 systemd[1]: Removed slice kubepods-burstable-podd52999bb_8492_473d_9071_736588a4056e.slice. Dec 13 14:29:31.145085 kubelet[1395]: I1213 14:29:31.145029 1395 topology_manager.go:215] "Topology Admit Handler" podUID="74e130d3-ef0c-4427-8715-7932c62afcae" podNamespace="kube-system" podName="cilium-jc447" Dec 13 14:29:31.145249 kubelet[1395]: E1213 14:29:31.145121 1395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52999bb-8492-473d-9071-736588a4056e" containerName="mount-cgroup" Dec 13 14:29:31.145249 kubelet[1395]: I1213 14:29:31.145168 1395 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52999bb-8492-473d-9071-736588a4056e" containerName="mount-cgroup" Dec 13 14:29:31.145249 kubelet[1395]: E1213 14:29:31.145206 1395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52999bb-8492-473d-9071-736588a4056e" containerName="mount-cgroup" Dec 13 14:29:31.145249 kubelet[1395]: I1213 14:29:31.145243 1395 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52999bb-8492-473d-9071-736588a4056e" containerName="mount-cgroup" Dec 13 14:29:31.156232 systemd[1]: Created slice kubepods-burstable-pod74e130d3_ef0c_4427_8715_7932c62afcae.slice. 
Dec 13 14:29:31.217930 kubelet[1395]: I1213 14:29:31.217684 1395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rnhxp" podStartSLOduration=1.818216589 podStartE2EDuration="5.217605111s" podCreationTimestamp="2024-12-13 14:29:26 +0000 UTC" firstStartedPulling="2024-12-13 14:29:27.331996265 +0000 UTC m=+89.512199738" lastFinishedPulling="2024-12-13 14:29:30.731384787 +0000 UTC m=+92.911588260" observedRunningTime="2024-12-13 14:29:31.175582382 +0000 UTC m=+93.355785805" watchObservedRunningTime="2024-12-13 14:29:31.217605111 +0000 UTC m=+93.397808584" Dec 13 14:29:31.241929 kubelet[1395]: I1213 14:29:31.241883 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-bpf-maps\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.242215 kubelet[1395]: I1213 14:29:31.242194 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-etc-cni-netd\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.242393 kubelet[1395]: I1213 14:29:31.242373 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-lib-modules\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.242577 kubelet[1395]: I1213 14:29:31.242555 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-host-proc-sys-kernel\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.242762 kubelet[1395]: I1213 14:29:31.242742 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74e130d3-ef0c-4427-8715-7932c62afcae-hubble-tls\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.242944 kubelet[1395]: I1213 14:29:31.242924 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-cilium-run\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.243123 kubelet[1395]: I1213 14:29:31.243100 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-hostproc\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.243307 kubelet[1395]: I1213 14:29:31.243281 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-cilium-cgroup\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " 
pod="kube-system/cilium-jc447" Dec 13 14:29:31.243490 kubelet[1395]: I1213 14:29:31.243468 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74e130d3-ef0c-4427-8715-7932c62afcae-clustermesh-secrets\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.243682 kubelet[1395]: I1213 14:29:31.243640 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-host-proc-sys-net\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.243892 kubelet[1395]: I1213 14:29:31.243855 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74e130d3-ef0c-4427-8715-7932c62afcae-cilium-ipsec-secrets\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.244088 kubelet[1395]: I1213 14:29:31.244063 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxfwf\" (UniqueName: \"kubernetes.io/projected/74e130d3-ef0c-4427-8715-7932c62afcae-kube-api-access-rxfwf\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.244282 kubelet[1395]: I1213 14:29:31.244261 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-cni-path\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.244490 kubelet[1395]: I1213 14:29:31.244468 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74e130d3-ef0c-4427-8715-7932c62afcae-xtables-lock\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.244724 kubelet[1395]: I1213 14:29:31.244701 1395 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74e130d3-ef0c-4427-8715-7932c62afcae-cilium-config-path\") pod \"cilium-jc447\" (UID: \"74e130d3-ef0c-4427-8715-7932c62afcae\") " pod="kube-system/cilium-jc447" Dec 13 14:29:31.314687 kubelet[1395]: E1213 14:29:31.314586 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:31.453992 kubelet[1395]: I1213 14:29:31.453881 1395 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52999bb-8492-473d-9071-736588a4056e" path="/var/lib/kubelet/pods/d52999bb-8492-473d-9071-736588a4056e/volumes" Dec 13 14:29:31.466324 env[1142]: time="2024-12-13T14:29:31.466173876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jc447,Uid:74e130d3-ef0c-4427-8715-7932c62afcae,Namespace:kube-system,Attempt:0,}" Dec 13 14:29:31.494698 env[1142]: time="2024-12-13T14:29:31.494334031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:31.494698 env[1142]: time="2024-12-13T14:29:31.494571102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:31.497837 env[1142]: time="2024-12-13T14:29:31.497723853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:29:31.498498 env[1142]: time="2024-12-13T14:29:31.498357231Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9 pid=3151 runtime=io.containerd.runc.v2 Dec 13 14:29:31.522882 systemd[1]: Started cri-containerd-b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9.scope. Dec 13 14:29:31.587614 env[1142]: time="2024-12-13T14:29:31.587534621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jc447,Uid:74e130d3-ef0c-4427-8715-7932c62afcae,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\"" Dec 13 14:29:31.592523 env[1142]: time="2024-12-13T14:29:31.592472571Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:29:31.616349 env[1142]: time="2024-12-13T14:29:31.616093368Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f\"" Dec 13 14:29:31.617455 env[1142]: time="2024-12-13T14:29:31.617345115Z" level=info msg="StartContainer for \"f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f\"" Dec 13 14:29:31.643010 systemd[1]: Started cri-containerd-f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f.scope. Dec 13 14:29:31.769037 systemd[1]: var-lib-kubelet-pods-d52999bb\x2d8492\x2d473d\x2d9071\x2d736588a4056e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2pkkh.mount: Deactivated successfully. Dec 13 14:29:31.769252 systemd[1]: var-lib-kubelet-pods-d52999bb\x2d8492\x2d473d\x2d9071\x2d736588a4056e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:29:31.769398 systemd[1]: var-lib-kubelet-pods-d52999bb\x2d8492\x2d473d\x2d9071\x2d736588a4056e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:29:31.833122 env[1142]: time="2024-12-13T14:29:31.833013450Z" level=info msg="StartContainer for \"f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f\" returns successfully" Dec 13 14:29:31.879496 systemd[1]: cri-containerd-f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f.scope: Deactivated successfully. Dec 13 14:29:31.927390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f-rootfs.mount: Deactivated successfully. 
Dec 13 14:29:31.945284 env[1142]: time="2024-12-13T14:29:31.945197875Z" level=info msg="shim disconnected" id=f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f Dec 13 14:29:31.945586 env[1142]: time="2024-12-13T14:29:31.945286200Z" level=warning msg="cleaning up after shim disconnected" id=f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f namespace=k8s.io Dec 13 14:29:31.945586 env[1142]: time="2024-12-13T14:29:31.945309983Z" level=info msg="cleaning up dead shim" Dec 13 14:29:31.960149 env[1142]: time="2024-12-13T14:29:31.960049488Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3236 runtime=io.containerd.runc.v2\n" Dec 13 14:29:32.058529 env[1142]: time="2024-12-13T14:29:32.058281912Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:29:32.087646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182791864.mount: Deactivated successfully. Dec 13 14:29:32.109931 env[1142]: time="2024-12-13T14:29:32.109787460Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4\"" Dec 13 14:29:32.111041 env[1142]: time="2024-12-13T14:29:32.110991749Z" level=info msg="StartContainer for \"395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4\"" Dec 13 14:29:32.140987 systemd[1]: Started cri-containerd-395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4.scope. Dec 13 14:29:32.194485 env[1142]: time="2024-12-13T14:29:32.194405105Z" level=info msg="StartContainer for \"395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4\" returns successfully" Dec 13 14:29:32.218819 systemd[1]: cri-containerd-395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4.scope: Deactivated successfully. Dec 13 14:29:32.256973 env[1142]: time="2024-12-13T14:29:32.256888809Z" level=info msg="shim disconnected" id=395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4 Dec 13 14:29:32.257513 env[1142]: time="2024-12-13T14:29:32.257446937Z" level=warning msg="cleaning up after shim disconnected" id=395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4 namespace=k8s.io Dec 13 14:29:32.257722 env[1142]: time="2024-12-13T14:29:32.257650836Z" level=info msg="cleaning up dead shim" Dec 13 14:29:32.272014 env[1142]: time="2024-12-13T14:29:32.271915483Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\n" Dec 13 14:29:32.316806 kubelet[1395]: E1213 14:29:32.315745 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:32.751243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324609273.mount: Deactivated successfully. Dec 13 14:29:33.068847 env[1142]: time="2024-12-13T14:29:33.068733748Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:29:33.104356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247888936.mount: Deactivated successfully. 
Dec 13 14:29:33.122563 env[1142]: time="2024-12-13T14:29:33.122443224Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89\"" Dec 13 14:29:33.123229 env[1142]: time="2024-12-13T14:29:33.123155199Z" level=info msg="StartContainer for \"7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89\"" Dec 13 14:29:33.163572 systemd[1]: Started cri-containerd-7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89.scope. Dec 13 14:29:33.200769 env[1142]: time="2024-12-13T14:29:33.200732130Z" level=info msg="StartContainer for \"7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89\" returns successfully" Dec 13 14:29:33.218615 systemd[1]: cri-containerd-7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89.scope: Deactivated successfully. Dec 13 14:29:33.265392 env[1142]: time="2024-12-13T14:29:33.265273384Z" level=info msg="shim disconnected" id=7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89 Dec 13 14:29:33.266150 env[1142]: time="2024-12-13T14:29:33.266011086Z" level=warning msg="cleaning up after shim disconnected" id=7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89 namespace=k8s.io Dec 13 14:29:33.266150 env[1142]: time="2024-12-13T14:29:33.266122142Z" level=info msg="cleaning up dead shim" Dec 13 14:29:33.286910 env[1142]: time="2024-12-13T14:29:33.286835088Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3358 runtime=io.containerd.runc.v2\n" Dec 13 14:29:33.316263 kubelet[1395]: E1213 14:29:33.316101 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:33.750967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89-rootfs.mount: Deactivated successfully. Dec 13 14:29:33.964481 kubelet[1395]: W1213 14:29:33.964377 1395 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd52999bb_8492_473d_9071_736588a4056e.slice/cri-containerd-744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14.scope WatchSource:0}: container "744b23ac7dedd3d2eb9a93130ccb4a108d2f8bd71cc902154194a5d8dd51cb14" in namespace "k8s.io": not found Dec 13 14:29:34.082254 env[1142]: time="2024-12-13T14:29:34.082152673Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:29:34.123908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762718848.mount: Deactivated successfully. Dec 13 14:29:34.130137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648628239.mount: Deactivated successfully. 
Dec 13 14:29:34.140604 env[1142]: time="2024-12-13T14:29:34.140509315Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0\"" Dec 13 14:29:34.145803 env[1142]: time="2024-12-13T14:29:34.145734746Z" level=info msg="StartContainer for \"bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0\"" Dec 13 14:29:34.173534 systemd[1]: Started cri-containerd-bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0.scope. Dec 13 14:29:34.201340 systemd[1]: cri-containerd-bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0.scope: Deactivated successfully. Dec 13 14:29:34.202882 env[1142]: time="2024-12-13T14:29:34.202785899Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e130d3_ef0c_4427_8715_7932c62afcae.slice/cri-containerd-bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0.scope/memory.events\": no such file or directory" Dec 13 14:29:34.211331 env[1142]: time="2024-12-13T14:29:34.211198960Z" level=info msg="StartContainer for \"bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0\" returns successfully" Dec 13 14:29:34.244050 env[1142]: time="2024-12-13T14:29:34.243997255Z" level=info msg="shim disconnected" id=bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0 Dec 13 14:29:34.244050 env[1142]: time="2024-12-13T14:29:34.244050254Z" level=warning msg="cleaning up after shim disconnected" id=bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0 namespace=k8s.io Dec 13 14:29:34.244260 env[1142]: time="2024-12-13T14:29:34.244061765Z" level=info msg="cleaning up dead shim" Dec 13 14:29:34.254214 env[1142]: time="2024-12-13T14:29:34.254172715Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3413 runtime=io.containerd.runc.v2\n" Dec 13 14:29:34.316574 kubelet[1395]: E1213 14:29:34.316411 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:34.367619 kubelet[1395]: E1213 14:29:34.365408 1395 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:29:34.751620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0-rootfs.mount: Deactivated successfully. Dec 13 14:29:35.084726 env[1142]: time="2024-12-13T14:29:35.084588078Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:29:35.122815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969746780.mount: Deactivated successfully. 
Dec 13 14:29:35.158708 env[1142]: time="2024-12-13T14:29:35.158553113Z" level=info msg="CreateContainer within sandbox \"b7011604f379109adcf0f15229c34483e5dbc50b2f3081415795f6ddfc60c1e9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"359357d1aea78dd6f92f600c577aba9d5acb199bce4254ba2fc9cbb4504da927\"" Dec 13 14:29:35.159727 env[1142]: time="2024-12-13T14:29:35.159587837Z" level=info msg="StartContainer for \"359357d1aea78dd6f92f600c577aba9d5acb199bce4254ba2fc9cbb4504da927\"" Dec 13 14:29:35.192378 systemd[1]: Started cri-containerd-359357d1aea78dd6f92f600c577aba9d5acb199bce4254ba2fc9cbb4504da927.scope. Dec 13 14:29:35.270905 env[1142]: time="2024-12-13T14:29:35.270838848Z" level=info msg="StartContainer for \"359357d1aea78dd6f92f600c577aba9d5acb199bce4254ba2fc9cbb4504da927\" returns successfully" Dec 13 14:29:35.316745 kubelet[1395]: E1213 14:29:35.316649 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:36.234879 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:29:36.306730 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Dec 13 14:29:36.317018 kubelet[1395]: E1213 14:29:36.316949 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:37.111029 kubelet[1395]: W1213 14:29:37.102346 1395 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e130d3_ef0c_4427_8715_7932c62afcae.slice/cri-containerd-f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f.scope WatchSource:0}: task f2e54bb78a12aa11364391aeee79929c9c95a09da37d25fcf05dc686d1f4858f not found: not found Dec 13 14:29:37.317158 kubelet[1395]: E1213 14:29:37.317109 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:38.319373 kubelet[1395]: E1213 14:29:38.319185 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:39.234389 kubelet[1395]: E1213 14:29:39.234316 1395 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:39.320615 kubelet[1395]: E1213 14:29:39.320560 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:39.545715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:29:39.548780 systemd-networkd[968]: lxc_health: Link UP Dec 13 14:29:39.552722 systemd-networkd[968]: lxc_health: Gained carrier Dec 13 14:29:40.219673 kubelet[1395]: W1213 14:29:40.219606 1395 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e130d3_ef0c_4427_8715_7932c62afcae.slice/cri-containerd-395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4.scope WatchSource:0}: task 395618b0a0ca92556149868ff591994cfb5204461d6f4c9c22e2511d7b49a7a4 not found: not found Dec 13 14:29:40.322179 kubelet[1395]: E1213 14:29:40.322117 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:40.619401 systemd[1]: run-containerd-runc-k8s.io-359357d1aea78dd6f92f600c577aba9d5acb199bce4254ba2fc9cbb4504da927-runc.ZFMhfb.mount: Deactivated 
successfully. Dec 13 14:29:41.023001 systemd-networkd[968]: lxc_health: Gained IPv6LL Dec 13 14:29:41.322816 kubelet[1395]: E1213 14:29:41.322642 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:41.491083 kubelet[1395]: I1213 14:29:41.491029 1395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jc447" podStartSLOduration=10.491009881 podStartE2EDuration="10.491009881s" podCreationTimestamp="2024-12-13 14:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:29:36.117024121 +0000 UTC m=+98.297227565" watchObservedRunningTime="2024-12-13 14:29:41.491009881 +0000 UTC m=+103.671213304" Dec 13 14:29:42.323648 kubelet[1395]: E1213 14:29:42.323545 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:43.324116 kubelet[1395]: E1213 14:29:43.324052 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:43.328139 kubelet[1395]: W1213 14:29:43.328081 1395 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e130d3_ef0c_4427_8715_7932c62afcae.slice/cri-containerd-7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89.scope WatchSource:0}: task 7ae33417af8ed5df3058ad89cff1e5c7b78fd015b51c5a9e9ad864f6d1058c89 not found: not found Dec 13 14:29:44.324320 kubelet[1395]: E1213 14:29:44.324266 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:45.325510 kubelet[1395]: E1213 14:29:45.325360 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:46.326140 kubelet[1395]: E1213 14:29:46.325967 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:46.439519 kubelet[1395]: W1213 14:29:46.439407 1395 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e130d3_ef0c_4427_8715_7932c62afcae.slice/cri-containerd-bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0.scope WatchSource:0}: task bf1a71980e44ce45bc290673a446cec7f1ec4696b71248972741df286f1dddd0 not found: not found Dec 13 14:29:47.326809 kubelet[1395]: E1213 14:29:47.326735 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:48.327592 kubelet[1395]: E1213 14:29:48.327447 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:49.327936 kubelet[1395]: E1213 14:29:49.327645 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:50.328297 kubelet[1395]: E1213 14:29:50.328165 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:51.328993 kubelet[1395]: E1213 14:29:51.328923 1395 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"