Dec 13 14:32:11.031230 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:32:11.031253 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:32:11.031265 kernel: BIOS-provided physical RAM map:
Dec 13 14:32:11.031272 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:32:11.031279 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:32:11.031286 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:32:11.031294 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 14:32:11.031301 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 14:32:11.031310 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 14:32:11.031317 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:32:11.031323 kernel: NX (Execute Disable) protection: active
Dec 13 14:32:11.031330 kernel: SMBIOS 2.8 present.
Dec 13 14:32:11.031337 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 14:32:11.031343 kernel: Hypervisor detected: KVM
Dec 13 14:32:11.031351 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:32:11.031361 kernel: kvm-clock: cpu 0, msr 3c19a001, primary cpu clock
Dec 13 14:32:11.031368 kernel: kvm-clock: using sched offset of 13166528278 cycles
Dec 13 14:32:11.031376 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:32:11.031384 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 14:32:11.031391 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:32:11.031399 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:32:11.031406 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 14:32:11.031414 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:32:11.031423 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:32:11.031430 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 14:32:11.031438 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:32:11.031445 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:32:11.031453 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:32:11.031460 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 14:32:11.031467 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:32:11.031475 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:32:11.031482 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 14:32:11.031491 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 14:32:11.031498 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 14:32:11.031506 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 14:32:11.031513 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 14:32:11.031520 kernel: No NUMA configuration found
Dec 13 14:32:11.031527 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 14:32:11.031534 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 14:32:11.031542 kernel: Zone ranges:
Dec 13 14:32:11.031554 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:32:11.031561 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 14:32:11.031569 kernel: Normal empty
Dec 13 14:32:11.031576 kernel: Movable zone start for each node
Dec 13 14:32:11.031584 kernel: Early memory node ranges
Dec 13 14:32:11.031591 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:32:11.031601 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 14:32:11.031608 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 14:32:11.031616 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:32:11.031624 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:32:11.031631 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 14:32:11.031639 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:32:11.031647 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:32:11.031654 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:32:11.031662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:32:11.031671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:32:11.031679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:32:11.031686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:32:11.031694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:32:11.031702 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:32:11.031709 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:32:11.031717 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 14:32:11.031724 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:32:11.031732 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:32:11.031740 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:32:11.031750 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:32:11.031758 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:32:11.031765 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:32:11.031773 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Dec 13 14:32:11.031780 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 14:32:11.031788 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 14:32:11.031795 kernel: Policy zone: DMA32
Dec 13 14:32:11.031804 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:32:11.031814 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:32:11.031822 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:32:11.031830 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:32:11.031838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:32:11.031846 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123076K reserved, 0K cma-reserved)
Dec 13 14:32:11.031854 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:32:11.031861 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:32:11.031869 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:32:11.031878 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:32:11.031886 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:32:11.031929 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:32:11.031938 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:32:11.031946 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:32:11.031954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:32:11.031961 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:32:11.031969 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:32:11.031977 kernel: Console: colour VGA+ 80x25
Dec 13 14:32:11.031988 kernel: printk: console [tty0] enabled
Dec 13 14:32:11.031996 kernel: printk: console [ttyS0] enabled
Dec 13 14:32:11.032003 kernel: ACPI: Core revision 20210730
Dec 13 14:32:11.032011 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:32:11.032019 kernel: x2apic enabled
Dec 13 14:32:11.032026 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:32:11.032034 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 14:32:11.032042 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 14:32:11.032049 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 14:32:11.032057 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 14:32:11.032068 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 14:32:11.032076 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:32:11.032083 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:32:11.032091 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:32:11.032099 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:32:11.032106 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:32:11.032114 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 14:32:11.032121 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:32:11.032131 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:32:11.032141 kernel: LSM: Security Framework initializing
Dec 13 14:32:11.032149 kernel: SELinux: Initializing.
Dec 13 14:32:11.032157 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:32:11.032166 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:32:11.032174 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 14:32:11.032182 kernel: Performance Events: AMD PMU driver.
Dec 13 14:32:11.032191 kernel: ... version: 0
Dec 13 14:32:11.032199 kernel: ... bit width: 48
Dec 13 14:32:11.032207 kernel: ... generic registers: 4
Dec 13 14:32:11.032224 kernel: ... value mask: 0000ffffffffffff
Dec 13 14:32:11.032232 kernel: ... max period: 00007fffffffffff
Dec 13 14:32:11.032242 kernel: ... fixed-purpose events: 0
Dec 13 14:32:11.032251 kernel: ... event mask: 000000000000000f
Dec 13 14:32:11.032260 kernel: signal: max sigframe size: 1440
Dec 13 14:32:11.032268 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:32:11.032277 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:32:11.032286 kernel: x86: Booting SMP configuration:
Dec 13 14:32:11.032296 kernel: .... node #0, CPUs: #1
Dec 13 14:32:11.032305 kernel: kvm-clock: cpu 1, msr 3c19a041, secondary cpu clock
Dec 13 14:32:11.032313 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Dec 13 14:32:11.032322 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:32:11.032330 kernel: smpboot: Max logical packages: 2
Dec 13 14:32:11.032339 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 14:32:11.032347 kernel: devtmpfs: initialized
Dec 13 14:32:11.032356 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:32:11.032365 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:32:11.032375 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:32:11.032384 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:32:11.032392 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:32:11.032401 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:32:11.032410 kernel: audit: type=2000 audit(1734100330.815:1): state=initialized audit_enabled=0 res=1
Dec 13 14:32:11.032418 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:32:11.032426 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:32:11.032435 kernel: cpuidle: using governor menu
Dec 13 14:32:11.032443 kernel: ACPI: bus type PCI registered
Dec 13 14:32:11.032454 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:32:11.032463 kernel: dca service started, version 1.12.1
Dec 13 14:32:11.032471 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:32:11.032480 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:32:11.032489 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:32:11.032498 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:32:11.032507 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:32:11.032516 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:32:11.032524 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:32:11.032535 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:32:11.032544 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:32:11.032552 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:32:11.032561 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:32:11.032569 kernel: ACPI: Interpreter enabled
Dec 13 14:32:11.032578 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:32:11.032586 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:32:11.032595 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:32:11.032604 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 14:32:11.032615 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:32:11.032759 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:32:11.032853 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:32:11.032867 kernel: acpiphp: Slot [3] registered
Dec 13 14:32:11.032876 kernel: acpiphp: Slot [4] registered
Dec 13 14:32:11.032884 kernel: acpiphp: Slot [5] registered
Dec 13 14:32:11.032908 kernel: acpiphp: Slot [6] registered
Dec 13 14:32:11.032921 kernel: acpiphp: Slot [7] registered
Dec 13 14:32:11.032930 kernel: acpiphp: Slot [8] registered
Dec 13 14:32:11.032938 kernel: acpiphp: Slot [9] registered
Dec 13 14:32:11.032947 kernel: acpiphp: Slot [10] registered
Dec 13 14:32:11.032955 kernel: acpiphp: Slot [11] registered
Dec 13 14:32:11.032963 kernel: acpiphp: Slot [12] registered
Dec 13 14:32:11.032973 kernel: acpiphp: Slot [13] registered
Dec 13 14:32:11.032982 kernel: acpiphp: Slot [14] registered
Dec 13 14:32:11.032990 kernel: acpiphp: Slot [15] registered
Dec 13 14:32:11.032998 kernel: acpiphp: Slot [16] registered
Dec 13 14:32:11.033008 kernel: acpiphp: Slot [17] registered
Dec 13 14:32:11.033015 kernel: acpiphp: Slot [18] registered
Dec 13 14:32:11.033023 kernel: acpiphp: Slot [19] registered
Dec 13 14:32:11.033031 kernel: acpiphp: Slot [20] registered
Dec 13 14:32:11.033039 kernel: acpiphp: Slot [21] registered
Dec 13 14:32:11.033047 kernel: acpiphp: Slot [22] registered
Dec 13 14:32:11.033055 kernel: acpiphp: Slot [23] registered
Dec 13 14:32:11.033063 kernel: acpiphp: Slot [24] registered
Dec 13 14:32:11.033071 kernel: acpiphp: Slot [25] registered
Dec 13 14:32:11.033080 kernel: acpiphp: Slot [26] registered
Dec 13 14:32:11.033088 kernel: acpiphp: Slot [27] registered
Dec 13 14:32:11.033096 kernel: acpiphp: Slot [28] registered
Dec 13 14:32:11.033104 kernel: acpiphp: Slot [29] registered
Dec 13 14:32:11.033111 kernel: acpiphp: Slot [30] registered
Dec 13 14:32:11.033119 kernel: acpiphp: Slot [31] registered
Dec 13 14:32:11.033127 kernel: PCI host bridge to bus 0000:00
Dec 13 14:32:11.033226 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:32:11.033302 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:32:11.033381 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:32:11.033454 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:32:11.033547 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 14:32:11.033620 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:32:11.033718 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:32:11.033812 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:32:11.033929 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 14:32:11.034020 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 14:32:11.034109 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 14:32:11.034196 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 14:32:11.034284 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 14:32:11.034372 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 14:32:11.034469 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 14:32:11.034563 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 14:32:11.034653 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 14:32:11.034751 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 14:32:11.034846 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 14:32:11.038009 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 14:32:11.038102 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 14:32:11.038190 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 14:32:11.038271 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:32:11.038374 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:32:11.038458 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 14:32:11.038539 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 14:32:11.038619 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 14:32:11.038700 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 14:32:11.038794 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:32:11.038875 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 14:32:11.038976 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 14:32:11.039057 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 14:32:11.039151 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 14:32:11.039237 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 14:32:11.039322 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 14:32:11.039420 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:32:11.039510 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 14:32:11.039597 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 14:32:11.039610 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:32:11.039619 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:32:11.039628 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:32:11.039637 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:32:11.039646 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:32:11.039659 kernel: iommu: Default domain type: Translated
Dec 13 14:32:11.039667 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:32:11.039751 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 14:32:11.039837 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:32:11.039942 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 14:32:11.039957 kernel: vgaarb: loaded
Dec 13 14:32:11.039965 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:32:11.039975 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:32:11.039985 kernel: PTP clock support registered
Dec 13 14:32:11.039998 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:32:11.040006 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:32:11.040014 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:32:11.040022 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 14:32:11.040030 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:32:11.040038 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:32:11.040046 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:32:11.040054 kernel: pnp: PnP ACPI init
Dec 13 14:32:11.040139 kernel: pnp 00:03: [dma 2]
Dec 13 14:32:11.040154 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:32:11.040163 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:32:11.040171 kernel: NET: Registered PF_INET protocol family
Dec 13 14:32:11.040179 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:32:11.040187 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:32:11.040196 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:32:11.040204 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:32:11.040212 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:32:11.040222 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:32:11.040230 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:32:11.040238 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:32:11.040246 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:32:11.040254 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:32:11.040330 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:32:11.040408 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:32:11.040482 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:32:11.040556 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:32:11.040631 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 14:32:11.040726 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 14:32:11.040810 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:32:11.040909 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:32:11.040922 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:32:11.040930 kernel: Initialise system trusted keyrings
Dec 13 14:32:11.040938 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:32:11.040951 kernel: Key type asymmetric registered
Dec 13 14:32:11.040959 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:32:11.040967 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:32:11.040975 kernel: io scheduler mq-deadline registered
Dec 13 14:32:11.040983 kernel: io scheduler kyber registered
Dec 13 14:32:11.040991 kernel: io scheduler bfq registered
Dec 13 14:32:11.040999 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:32:11.041008 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 14:32:11.041016 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 14:32:11.041024 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 14:32:11.041034 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 14:32:11.041042 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:32:11.041050 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:32:11.041058 kernel: random: crng init done
Dec 13 14:32:11.041066 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:32:11.041074 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:32:11.041082 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:32:11.041172 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 14:32:11.041188 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:32:11.041286 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 14:32:11.041375 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:32:10 UTC (1734100330)
Dec 13 14:32:11.041458 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 14:32:11.041471 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:32:11.041480 kernel: Segment Routing with IPv6
Dec 13 14:32:11.041489 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:32:11.041498 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:32:11.041519 kernel: Key type dns_resolver registered
Dec 13 14:32:11.041531 kernel: IPI shorthand broadcast: enabled
Dec 13 14:32:11.041539 kernel: sched_clock: Marking stable (764609297, 117475132)->(912721088, -30636659)
Dec 13 14:32:11.041547 kernel: registered taskstats version 1
Dec 13 14:32:11.041555 kernel: Loading compiled-in X.509 certificates
Dec 13 14:32:11.041563 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:32:11.041572 kernel: Key type .fscrypt registered
Dec 13 14:32:11.041580 kernel: Key type fscrypt-provisioning registered
Dec 13 14:32:11.041588 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:32:11.041598 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:32:11.041606 kernel: ima: No architecture policies found
Dec 13 14:32:11.041614 kernel: clk: Disabling unused clocks
Dec 13 14:32:11.041622 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:32:11.041630 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:32:11.041638 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:32:11.041646 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:32:11.041655 kernel: Run /init as init process
Dec 13 14:32:11.041663 kernel: with arguments:
Dec 13 14:32:11.041674 kernel: /init
Dec 13 14:32:11.041682 kernel: with environment:
Dec 13 14:32:11.041689 kernel: HOME=/
Dec 13 14:32:11.041697 kernel: TERM=linux
Dec 13 14:32:11.041705 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:32:11.041716 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:32:11.041728 systemd[1]: Detected virtualization kvm.
Dec 13 14:32:11.041737 systemd[1]: Detected architecture x86-64.
Dec 13 14:32:11.041748 systemd[1]: Running in initrd.
Dec 13 14:32:11.041757 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:32:11.041766 systemd[1]: Hostname set to .
Dec 13 14:32:11.041775 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:32:11.041783 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:32:11.041792 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:32:11.041800 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:32:11.041809 systemd[1]: Reached target paths.target.
Dec 13 14:32:11.041820 systemd[1]: Reached target slices.target.
Dec 13 14:32:11.041829 systemd[1]: Reached target swap.target.
Dec 13 14:32:11.041837 systemd[1]: Reached target timers.target.
Dec 13 14:32:11.041846 systemd[1]: Listening on iscsid.socket.
Dec 13 14:32:11.041854 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:32:11.041863 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:32:11.041872 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:32:11.041882 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:32:11.045938 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:32:11.045952 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:32:11.045961 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:32:11.045971 systemd[1]: Reached target sockets.target.
Dec 13 14:32:11.045997 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:32:11.046009 systemd[1]: Finished network-cleanup.service.
Dec 13 14:32:11.046021 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:32:11.046031 systemd[1]: Starting systemd-journald.service...
Dec 13 14:32:11.046040 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:32:11.046049 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:32:11.046058 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:32:11.046066 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:32:11.046079 systemd-journald[184]: Journal started
Dec 13 14:32:11.046198 systemd-journald[184]: Runtime Journal (/run/log/journal/9e976cb5ad4642fea30be342de678085) is 4.9M, max 39.5M, 34.5M free.
Dec 13 14:32:11.007267 systemd-modules-load[185]: Inserted module 'overlay'
Dec 13 14:32:11.075287 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:32:11.075312 kernel: audit: type=1130 audit(1734100331.068:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.075330 systemd[1]: Started systemd-journald.service.
Dec 13 14:32:11.075345 kernel: Bridge firewalling registered
Dec 13 14:32:11.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.054688 systemd-resolved[186]: Positive Trust Anchors:
Dec 13 14:32:11.054698 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:32:11.054736 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:32:11.057387 systemd-resolved[186]: Defaulting to hostname 'linux'.
Dec 13 14:32:11.074473 systemd-modules-load[185]: Inserted module 'br_netfilter'
Dec 13 14:32:11.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.080804 systemd[1]: Started systemd-resolved.service.
Dec 13 14:32:11.110185 kernel: audit: type=1130 audit(1734100331.079:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.110209 kernel: audit: type=1130 audit(1734100331.094:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.110222 kernel: SCSI subsystem initialized
Dec 13 14:32:11.110233 kernel: audit: type=1130 audit(1734100331.102:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.095958 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:32:11.103556 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:32:11.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.110965 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:32:11.115920 kernel: audit: type=1130 audit(1734100331.109:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:11.115983 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:32:11.117483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:32:11.120446 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:32:11.120471 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:32:11.127550 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:32:11.129953 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:32:11.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.134205 kernel: audit: type=1130 audit(1734100331.129:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.137155 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:32:11.137302 systemd-modules-load[185]: Inserted module 'dm_multipath' Dec 13 14:32:11.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.139107 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:32:11.150941 kernel: audit: type=1130 audit(1734100331.137:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.150966 kernel: audit: type=1130 audit(1734100331.141:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.146452 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:32:11.150345 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:32:11.158605 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:32:11.160227 dracut-cmdline[205]: dracut-dracut-053 Dec 13 14:32:11.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.164932 kernel: audit: type=1130 audit(1734100331.158:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.165026 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:32:11.233917 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:32:11.254917 kernel: iscsi: registered transport (tcp) Dec 13 14:32:11.280966 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:32:11.281032 kernel: QLogic iSCSI HBA Driver Dec 13 14:32:11.331918 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:32:11.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.333403 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 14:32:11.404014 kernel: raid6: sse2x4 gen() 13142 MB/s Dec 13 14:32:11.420982 kernel: raid6: sse2x4 xor() 5013 MB/s Dec 13 14:32:11.438158 kernel: raid6: sse2x2 gen() 14348 MB/s Dec 13 14:32:11.454978 kernel: raid6: sse2x2 xor() 8329 MB/s Dec 13 14:32:11.471957 kernel: raid6: sse2x1 gen() 10803 MB/s Dec 13 14:32:11.489778 kernel: raid6: sse2x1 xor() 6715 MB/s Dec 13 14:32:11.489853 kernel: raid6: using algorithm sse2x2 gen() 14348 MB/s Dec 13 14:32:11.489882 kernel: raid6: .... xor() 8329 MB/s, rmw enabled Dec 13 14:32:11.490676 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 14:32:11.506174 kernel: xor: measuring software checksum speed Dec 13 14:32:11.506251 kernel: prefetch64-sse : 18320 MB/sec Dec 13 14:32:11.507208 kernel: generic_sse : 16672 MB/sec Dec 13 14:32:11.507267 kernel: xor: using function: prefetch64-sse (18320 MB/sec) Dec 13 14:32:11.623030 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:32:11.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.639041 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:32:11.640000 audit: BPF prog-id=7 op=LOAD Dec 13 14:32:11.640000 audit: BPF prog-id=8 op=LOAD Dec 13 14:32:11.642477 systemd[1]: Starting systemd-udevd.service... Dec 13 14:32:11.667402 systemd-udevd[385]: Using default interface naming scheme 'v252'. Dec 13 14:32:11.672358 systemd[1]: Started systemd-udevd.service. Dec 13 14:32:11.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.680736 systemd[1]: Starting dracut-pre-trigger.service... 
Dec 13 14:32:11.695646 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Dec 13 14:32:11.745289 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:32:11.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.748310 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:32:11.808278 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:32:11.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:11.874929 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 14:32:11.960508 kernel: libata version 3.00 loaded. Dec 13 14:32:11.960538 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 14:32:11.960684 kernel: scsi host0: ata_piix Dec 13 14:32:11.960827 kernel: scsi host1: ata_piix Dec 13 14:32:11.960962 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 14:32:11.960977 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 14:32:11.960992 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:32:11.961004 kernel: GPT:17805311 != 41943039 Dec 13 14:32:11.961015 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:32:11.961026 kernel: GPT:17805311 != 41943039 Dec 13 14:32:11.961036 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:32:11.961046 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:32:12.090972 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440) Dec 13 14:32:12.110913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Dec 13 14:32:12.112312 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:32:12.124819 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:32:12.140805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:32:12.150671 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:32:12.152167 systemd[1]: Starting disk-uuid.service... Dec 13 14:32:12.165975 disk-uuid[460]: Primary Header is updated. Dec 13 14:32:12.165975 disk-uuid[460]: Secondary Entries is updated. Dec 13 14:32:12.165975 disk-uuid[460]: Secondary Header is updated. Dec 13 14:32:12.171956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:32:13.197957 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:32:13.198074 disk-uuid[461]: The operation has completed successfully. Dec 13 14:32:13.266571 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:32:13.268512 systemd[1]: Finished disk-uuid.service. Dec 13 14:32:13.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:13.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:13.281866 systemd[1]: Starting verity-setup.service... Dec 13 14:32:13.305933 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 14:32:14.535558 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:32:14.539681 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:32:14.546418 systemd[1]: Finished verity-setup.service. Dec 13 14:32:14.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:32:14.772915 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:32:14.774046 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:32:14.775485 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:32:14.777202 systemd[1]: Starting ignition-setup.service... Dec 13 14:32:14.780104 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:32:14.794453 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:32:14.794518 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:32:14.794537 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:32:14.825457 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:32:14.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:14.839081 systemd[1]: Finished ignition-setup.service. Dec 13 14:32:14.840587 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:32:14.911029 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:32:14.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:14.913000 audit: BPF prog-id=9 op=LOAD Dec 13 14:32:14.915638 systemd[1]: Starting systemd-networkd.service... Dec 13 14:32:14.958169 systemd-networkd[631]: lo: Link UP Dec 13 14:32:14.958182 systemd-networkd[631]: lo: Gained carrier Dec 13 14:32:14.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:14.959550 systemd-networkd[631]: Enumeration completed Dec 13 14:32:14.959668 systemd[1]: Started systemd-networkd.service. Dec 13 14:32:14.960397 systemd[1]: Reached target network.target. Dec 13 14:32:14.960762 systemd-networkd[631]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:32:14.963399 systemd[1]: Starting iscsiuio.service... Dec 13 14:32:14.968218 systemd-networkd[631]: eth0: Link UP Dec 13 14:32:14.969316 systemd-networkd[631]: eth0: Gained carrier Dec 13 14:32:14.979423 systemd[1]: Started iscsiuio.service. Dec 13 14:32:14.985370 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 13 14:32:14.985428 kernel: audit: type=1130 audit(1734100334.979:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:14.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:14.982590 systemd[1]: Starting iscsid.service... Dec 13 14:32:14.985639 systemd-networkd[631]: eth0: DHCPv4 address 172.24.4.94/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 14:32:14.992206 iscsid[637]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:32:14.992206 iscsid[637]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:32:14.992206 iscsid[637]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:32:14.992206 iscsid[637]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 14:32:14.992206 iscsid[637]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:32:14.992206 iscsid[637]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:32:14.992206 iscsid[637]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:32:15.011745 kernel: audit: type=1130 audit(1734100334.992:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:14.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:14.992449 systemd[1]: Started iscsid.service. Dec 13 14:32:15.017983 kernel: audit: type=1130 audit(1734100335.012:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:15.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:14.994224 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:32:15.013364 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:32:15.013985 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:32:15.018451 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:32:15.019467 systemd[1]: Reached target remote-fs.target. Dec 13 14:32:15.021388 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:32:15.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:15.032869 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:32:15.037920 kernel: audit: type=1130 audit(1734100335.032:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.102873 ignition[573]: Ignition 2.14.0 Dec 13 14:32:16.102937 ignition[573]: Stage: fetch-offline Dec 13 14:32:16.103104 ignition[573]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:16.108430 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:32:16.103157 ignition[573]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:32:16.105406 ignition[573]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:32:16.123218 kernel: audit: type=1130 audit(1734100336.110:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.113156 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:32:16.105656 ignition[573]: parsed url from cmdline: "" Dec 13 14:32:16.105665 ignition[573]: no config URL provided Dec 13 14:32:16.105678 ignition[573]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:32:16.105697 ignition[573]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:32:16.105711 ignition[573]: failed to fetch config: resource requires networking Dec 13 14:32:16.106389 ignition[573]: Ignition finished successfully Dec 13 14:32:16.139063 ignition[655]: Ignition 2.14.0 Dec 13 14:32:16.139097 ignition[655]: Stage: fetch Dec 13 14:32:16.139439 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:16.139511 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:32:16.142596 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:32:16.142837 ignition[655]: parsed url from cmdline: "" Dec 13 14:32:16.142848 ignition[655]: no config URL provided Dec 13 14:32:16.142861 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:32:16.142881 ignition[655]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:32:16.145039 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 14:32:16.145155 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 14:32:16.145322 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 13 14:32:16.352675 ignition[655]: GET result: OK Dec 13 14:32:16.352885 ignition[655]: parsing config with SHA512: 7059d9bb2880f945790d4bfc67bd88d4605b92e53751e95c9d4de64fd334cd199f437556a19dbbcac1da2aea17e5accc68629ab88a1a61b1adf402346c8025fa Dec 13 14:32:16.364692 unknown[655]: fetched base config from "system" Dec 13 14:32:16.364721 unknown[655]: fetched base config from "system" Dec 13 14:32:16.364735 unknown[655]: fetched user config from "openstack" Dec 13 14:32:16.365589 ignition[655]: fetch: fetch complete Dec 13 14:32:16.365603 ignition[655]: fetch: fetch passed Dec 13 14:32:16.365685 ignition[655]: Ignition finished successfully Dec 13 14:32:16.368049 systemd[1]: Finished ignition-fetch.service. Dec 13 14:32:16.378304 kernel: audit: type=1130 audit(1734100336.367:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.369692 systemd[1]: Starting ignition-kargs.service... Dec 13 14:32:16.389094 ignition[661]: Ignition 2.14.0 Dec 13 14:32:16.389108 ignition[661]: Stage: kargs Dec 13 14:32:16.389250 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:16.393110 systemd[1]: Finished ignition-kargs.service. 
Dec 13 14:32:16.389274 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:32:16.390278 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:32:16.404559 kernel: audit: type=1130 audit(1734100336.393:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.395753 systemd[1]: Starting ignition-disks.service... Dec 13 14:32:16.391200 ignition[661]: kargs: kargs passed Dec 13 14:32:16.391247 ignition[661]: Ignition finished successfully Dec 13 14:32:16.410922 ignition[666]: Ignition 2.14.0 Dec 13 14:32:16.410936 ignition[666]: Stage: disks Dec 13 14:32:16.411072 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:16.411098 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:32:16.414801 systemd[1]: Finished ignition-disks.service. Dec 13 14:32:16.412113 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:32:16.426271 kernel: audit: type=1130 audit(1734100336.416:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:16.413730 ignition[666]: disks: disks passed Dec 13 14:32:16.417591 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:32:16.413789 ignition[666]: Ignition finished successfully Dec 13 14:32:16.426698 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:32:16.428081 systemd[1]: Reached target local-fs.target. Dec 13 14:32:16.429537 systemd[1]: Reached target sysinit.target. Dec 13 14:32:16.431053 systemd[1]: Reached target basic.target. Dec 13 14:32:16.433414 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:32:16.467498 systemd-fsck[674]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:32:16.496970 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:32:16.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.500232 systemd[1]: Mounting sysroot.mount... Dec 13 14:32:16.509403 kernel: audit: type=1130 audit(1734100336.498:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.528941 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:32:16.530562 systemd[1]: Mounted sysroot.mount. Dec 13 14:32:16.531879 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:32:16.536874 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:32:16.539108 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:32:16.540711 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 14:32:16.546236 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Dec 13 14:32:16.546303 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:32:16.552064 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:32:16.565971 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:32:16.573076 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:32:16.601873 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:32:16.604522 systemd-networkd[631]: eth0: Gained IPv6LL Dec 13 14:32:16.611943 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681) Dec 13 14:32:16.623834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:32:16.623917 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:32:16.623942 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:32:16.625156 initrd-setup-root[694]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:32:16.635972 initrd-setup-root[718]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:32:16.645980 initrd-setup-root[728]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:32:16.650273 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:32:16.776030 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:32:16.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.779074 systemd[1]: Starting ignition-mount.service... Dec 13 14:32:16.788030 kernel: audit: type=1130 audit(1734100336.776:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.793083 systemd[1]: Starting sysroot-boot.service... Dec 13 14:32:16.804590 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. 
Dec 13 14:32:16.806382 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:32:16.838854 ignition[749]: INFO : Ignition 2.14.0 Dec 13 14:32:16.838854 ignition[749]: INFO : Stage: mount Dec 13 14:32:16.840247 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:16.840247 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:32:16.840247 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:32:16.845192 ignition[749]: INFO : mount: mount passed Dec 13 14:32:16.845192 ignition[749]: INFO : Ignition finished successfully Dec 13 14:32:16.844090 systemd[1]: Finished sysroot-boot.service. Dec 13 14:32:16.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.848268 systemd[1]: Finished ignition-mount.service. Dec 13 14:32:16.856926 coreos-metadata[680]: Dec 13 14:32:16.856 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 14:32:16.877046 coreos-metadata[680]: Dec 13 14:32:16.876 INFO Fetch successful Dec 13 14:32:16.877807 coreos-metadata[680]: Dec 13 14:32:16.877 INFO wrote hostname ci-3510-3-6-f-a8c495e5df.novalocal to /sysroot/etc/hostname Dec 13 14:32:16.880988 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 14:32:16.881110 systemd[1]: Finished flatcar-openstack-hostname.service. 
Dec 13 14:32:16.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:16.884639 systemd[1]: Starting ignition-files.service... Dec 13 14:32:16.898068 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:32:16.907954 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (757) Dec 13 14:32:16.911371 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:32:16.911431 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:32:16.911459 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:32:16.925570 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:32:16.947839 ignition[776]: INFO : Ignition 2.14.0
Dec 13 14:32:16.947839 ignition[776]: INFO : Stage: files
Dec 13 14:32:16.950651 ignition[776]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:16.950651 ignition[776]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 14:32:16.950651 ignition[776]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:32:16.960241 ignition[776]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:32:16.963038 ignition[776]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:32:16.964751 ignition[776]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:32:16.969871 ignition[776]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:32:16.971662 ignition[776]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:32:16.971662 ignition[776]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:32:16.971393 unknown[776]: wrote ssh authorized keys file for user: core
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:16.976790 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 14:32:17.415233 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 14:32:19.087300 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:19.087300 ignition[776]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:32:19.087300 ignition[776]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:32:19.087300 ignition[776]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:32:19.093229 ignition[776]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:32:19.094371 ignition[776]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:32:19.094371 ignition[776]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:32:19.094371 ignition[776]: INFO : files: files passed
Dec 13 14:32:19.094371 ignition[776]: INFO : Ignition finished successfully
Dec 13 14:32:19.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.096102 systemd[1]: Finished ignition-files.service.
Dec 13 14:32:19.099186 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:32:19.102124 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:32:19.104920 systemd[1]: Starting ignition-quench.service...
Dec 13 14:32:19.109342 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:32:19.110066 systemd[1]: Finished ignition-quench.service.
Dec 13 14:32:19.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.113933 initrd-setup-root-after-ignition[801]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:32:19.115561 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:32:19.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.116823 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:32:19.119018 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:32:19.138294 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:32:19.138430 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:32:19.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.140343 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:32:19.141690 systemd[1]: Reached target initrd.target.
Dec 13 14:32:19.143252 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:32:19.144243 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:32:19.160534 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:32:19.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.163473 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:32:19.181961 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:32:19.184475 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:32:19.187287 systemd[1]: Stopped target timers.target.
Dec 13 14:32:19.189274 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:32:19.189552 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:32:19.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.192023 systemd[1]: Stopped target initrd.target.
Dec 13 14:32:19.193673 systemd[1]: Stopped target basic.target.
Dec 13 14:32:19.195649 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:32:19.197568 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:32:19.199636 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:32:19.201737 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:32:19.203795 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:32:19.205681 systemd[1]: Stopped target sysinit.target.
Dec 13 14:32:19.207709 systemd[1]: Stopped target local-fs.target.
Dec 13 14:32:19.209582 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:32:19.211633 systemd[1]: Stopped target swap.target.
Dec 13 14:32:19.213362 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:32:19.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.213657 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:32:19.215601 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:32:19.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.217377 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:32:19.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.217671 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:32:19.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.219749 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:32:19.220090 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:32:19.221635 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:32:19.221934 systemd[1]: Stopped ignition-files.service.
Dec 13 14:32:19.225301 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:32:19.234334 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:32:19.241270 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:32:19.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.258041 ignition[814]: INFO : Ignition 2.14.0
Dec 13 14:32:19.258041 ignition[814]: INFO : Stage: umount
Dec 13 14:32:19.258041 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:19.258041 ignition[814]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 14:32:19.258041 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:32:19.258041 ignition[814]: INFO : umount: umount passed
Dec 13 14:32:19.258041 ignition[814]: INFO : Ignition finished successfully
Dec 13 14:32:19.250313 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:32:19.250559 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:32:19.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.251164 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:32:19.251270 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:32:19.253348 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:32:19.253453 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:32:19.254545 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:32:19.254630 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:32:19.255451 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:32:19.255558 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:32:19.256053 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:32:19.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.256090 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:32:19.256540 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:32:19.256577 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:32:19.264093 systemd[1]: Stopped target network.target.
Dec 13 14:32:19.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.264516 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:32:19.264578 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:32:19.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.265194 systemd[1]: Stopped target paths.target.
Dec 13 14:32:19.265791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:32:19.270969 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:32:19.271549 systemd[1]: Stopped target slices.target.
Dec 13 14:32:19.271986 systemd[1]: Stopped target sockets.target.
Dec 13 14:32:19.272704 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:32:19.272733 systemd[1]: Closed iscsid.socket.
Dec 13 14:32:19.273209 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:32:19.273249 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:32:19.273701 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:32:19.273746 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:32:19.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.274624 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:32:19.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.275168 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:32:19.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.276594 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:32:19.277162 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:32:19.277240 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:32:19.277974 systemd-networkd[631]: eth0: DHCPv6 lease lost
Dec 13 14:32:19.302000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:32:19.279399 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:32:19.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.307000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:32:19.279475 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:32:19.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.282720 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:32:19.282751 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:32:19.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.285351 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:32:19.286067 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:32:19.286126 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:32:19.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.293947 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:32:19.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.294011 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:32:19.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.295178 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:32:19.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.295234 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:32:19.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.296128 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:32:19.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.302968 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:32:19.303732 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:32:19.303868 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:32:19.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.308276 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:32:19.308468 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:32:19.311989 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:32:19.312128 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:32:19.313429 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:32:19.313481 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:32:19.314345 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:32:19.314392 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:32:19.315355 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:32:19.315417 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:32:19.316579 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:32:19.316641 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:32:19.317651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:32:19.317707 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:32:19.319710 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:32:19.323988 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:32:19.324072 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:32:19.324973 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:32:19.325032 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:32:19.328528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:32:19.328588 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:32:19.331042 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:32:19.331723 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:32:19.331850 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:32:19.976343 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:32:19.976639 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:32:19.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.980066 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:32:19.982222 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:32:20.002937 kernel: kauditd_printk_skb: 45 callbacks suppressed
Dec 13 14:32:20.003024 kernel: audit: type=1131 audit(1734100339.983:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:19.982384 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:32:19.987325 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:32:20.039195 systemd[1]: Switching root.
Dec 13 14:32:20.076597 iscsid[637]: iscsid shutting down.
Dec 13 14:32:20.077920 systemd-journald[184]: Received SIGTERM from PID 1 (n/a).
Dec 13 14:32:20.078018 systemd-journald[184]: Journal stopped
Dec 13 14:32:26.015598 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:32:26.015658 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:32:26.015676 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:32:26.015688 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:32:26.015705 kernel: SELinux: policy capability open_perms=1
Dec 13 14:32:26.015717 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:32:26.015730 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:32:26.015743 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:32:26.015757 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:32:26.015769 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:32:26.015780 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:32:26.015792 kernel: audit: type=1403 audit(1734100340.368:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:32:26.015811 systemd[1]: Successfully loaded SELinux policy in 94.421ms.
Dec 13 14:32:26.015831 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.038ms.
Dec 13 14:32:26.015845 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:32:26.015859 systemd[1]: Detected virtualization kvm.
Dec 13 14:32:26.015880 systemd[1]: Detected architecture x86-64.
Dec 13 14:32:26.017969 systemd[1]: Detected first boot.
Dec 13 14:32:26.018021 systemd[1]: Hostname set to .
Dec 13 14:32:26.018046 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:32:26.018064 kernel: audit: type=1400 audit(1734100340.490:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:26.018080 kernel: audit: type=1400 audit(1734100340.490:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:26.018092 kernel: audit: type=1334 audit(1734100340.490:84): prog-id=10 op=LOAD
Dec 13 14:32:26.018108 kernel: audit: type=1334 audit(1734100340.490:85): prog-id=10 op=UNLOAD
Dec 13 14:32:26.018121 kernel: audit: type=1334 audit(1734100340.499:86): prog-id=11 op=LOAD
Dec 13 14:32:26.018133 kernel: audit: type=1334 audit(1734100340.499:87): prog-id=11 op=UNLOAD
Dec 13 14:32:26.018146 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:32:26.018159 kernel: audit: type=1400 audit(1734100340.643:88): avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:32:26.018173 kernel: audit: type=1300 audit(1734100340.643:88): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:26.018185 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:32:26.018199 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:32:26.018216 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:32:26.018232 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:32:26.018244 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 14:32:26.018256 kernel: audit: type=1334 audit(1734100345.727:90): prog-id=12 op=LOAD
Dec 13 14:32:26.018268 kernel: audit: type=1334 audit(1734100345.727:91): prog-id=3 op=UNLOAD
Dec 13 14:32:26.018280 kernel: audit: type=1334 audit(1734100345.730:92): prog-id=13 op=LOAD
Dec 13 14:32:26.018292 kernel: audit: type=1334 audit(1734100345.734:93): prog-id=14 op=LOAD
Dec 13 14:32:26.018304 kernel: audit: type=1334 audit(1734100345.734:94): prog-id=4 op=UNLOAD
Dec 13 14:32:26.018318 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:32:26.018331 kernel: audit: type=1334 audit(1734100345.734:95): prog-id=5 op=UNLOAD
Dec 13 14:32:26.018343 kernel: audit: type=1131 audit(1734100345.735:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.018356 systemd[1]: Stopped iscsid.service.
Dec 13 14:32:26.018368 kernel: audit: type=1131 audit(1734100345.763:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.018382 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:32:26.018395 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:32:26.018409 kernel: audit: type=1130 audit(1734100345.776:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.018423 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:32:26.018436 kernel: audit: type=1131 audit(1734100345.776:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.018450 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:32:26.018464 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:32:26.018477 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:32:26.018489 systemd[1]: Created slice system-getty.slice.
Dec 13 14:32:26.018502 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:32:26.018514 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:32:26.018527 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:32:26.018540 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:32:26.018552 systemd[1]: Created slice user.slice.
Dec 13 14:32:26.018566 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:32:26.018579 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:32:26.018591 systemd[1]: Set up automount boot.automount.
Dec 13 14:32:26.018604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:32:26.018616 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:32:26.018628 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:32:26.018641 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:32:26.018653 systemd[1]: Reached target integritysetup.target.
Dec 13 14:32:26.018665 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:32:26.018680 systemd[1]: Reached target remote-fs.target.
Dec 13 14:32:26.018692 systemd[1]: Reached target slices.target.
Dec 13 14:32:26.018705 systemd[1]: Reached target swap.target.
Dec 13 14:32:26.018717 systemd[1]: Reached target torcx.target.
Dec 13 14:32:26.018730 systemd[1]: Reached target veritysetup.target.
Dec 13 14:32:26.018743 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:32:26.018755 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:32:26.018767 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:32:26.018780 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:32:26.018793 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:32:26.018808 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:32:26.018820 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:32:26.018833 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:32:26.018845 systemd[1]: Mounting media.mount...
Dec 13 14:32:26.018858 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:26.018870 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:32:26.018882 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:32:26.018993 systemd[1]: Mounting tmp.mount...
Dec 13 14:32:26.019019 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:32:26.019039 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:26.019053 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:32:26.019066 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:32:26.019078 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:26.019090 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:32:26.019102 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:26.019114 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:32:26.019126 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:26.019140 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:32:26.019156 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:32:26.019169 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:32:26.019181 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:32:26.019194 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:32:26.019206 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:32:26.019219 systemd[1]: Starting systemd-journald.service...
Dec 13 14:32:26.019232 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:32:26.019244 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:32:26.019256 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:32:26.019271 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:32:26.019285 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:32:26.019296 kernel: loop: module loaded
Dec 13 14:32:26.019308 systemd[1]: Stopped verity-setup.service.
Dec 13 14:32:26.019321 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:26.019333 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:32:26.019346 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:32:26.019358 systemd[1]: Mounted media.mount.
Dec 13 14:32:26.019370 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:32:26.019384 kernel: fuse: init (API version 7.34)
Dec 13 14:32:26.019396 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:32:26.019408 systemd[1]: Mounted tmp.mount.
Dec 13 14:32:26.019421 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:32:26.019433 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:32:26.019448 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:32:26.019461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:26.019474 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:26.019489 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:32:26.019501 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:32:26.019513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:26.019526 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:26.019539 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:32:26.019551 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:32:26.019566 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:26.019579 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:26.019598 systemd-journald[920]: Journal started
Dec 13 14:32:26.019662 systemd-journald[920]: Runtime Journal (/run/log/journal/9e976cb5ad4642fea30be342de678085) is 4.9M, max 39.5M, 34.5M free.
Dec 13 14:32:20.368000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:32:20.490000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:20.490000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:20.490000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:32:26.024572 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:32:26.024604 systemd[1]: Started systemd-journald.service.
Dec 13 14:32:20.490000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:32:20.499000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:32:20.499000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:32:20.643000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:32:20.643000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:20.643000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:32:20.645000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:32:20.645000 audit[847]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:20.645000 audit: CWD cwd="/"
Dec 13 14:32:20.645000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:20.645000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:20.645000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:32:25.727000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:32:25.727000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:32:25.730000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:32:25.734000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:32:25.734000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:32:25.734000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:32:25.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:25.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:25.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:25.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 14:32:25.799000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:32:25.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:25.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:25.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:25.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:25.945000 audit: BPF prog-id=15 op=LOAD Dec 13 14:32:25.946000 audit: BPF prog-id=16 op=LOAD Dec 13 14:32:25.946000 audit: BPF prog-id=17 op=LOAD Dec 13 14:32:25.946000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:32:25.946000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:32:25.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:25.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:25.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:25.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:26.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.013000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:32:26.013000 audit[920]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff48010820 a2=4000 a3=7fff480108bc items=0 ppid=1 pid=920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:26.013000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:32:26.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:26.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:25.726157 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:32:20.640818 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:32:25.726173 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:32:20.641762 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:32:25.736109 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:32:20.641785 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:32:26.023844 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:32:20.641825 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:32:26.026313 systemd[1]: Finished systemd-remount-fs.service. 
Dec 13 14:32:20.641838 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:32:26.027197 systemd[1]: Reached target network-pre.target. Dec 13 14:32:20.641872 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:32:26.031134 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:32:20.641903 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:32:26.033663 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:32:20.642137 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:32:26.034395 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 14:32:20.642179 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:32:20.642194 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:32:20.643058 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:32:20.643096 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:32:20.643117 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:32:20.643134 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:32:20.643153 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:32:20.643169 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:32:25.265369 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:25Z" 
level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:32:25.265754 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:25Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:32:25.265931 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:25Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:32:25.266148 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:25Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:32:25.266217 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:25Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:32:25.266315 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T14:32:25Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:32:26.039858 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:32:26.041459 systemd[1]: Starting systemd-journal-flush.service... 
Dec 13 14:32:26.042005 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:32:26.044164 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:32:26.044687 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:32:26.045764 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:32:26.049317 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:32:26.050021 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:32:26.061021 systemd-journald[920]: Time spent on flushing to /var/log/journal/9e976cb5ad4642fea30be342de678085 is 43.048ms for 1096 entries. Dec 13 14:32:26.061021 systemd-journald[920]: System Journal (/var/log/journal/9e976cb5ad4642fea30be342de678085) is 8.0M, max 584.8M, 576.8M free. Dec 13 14:32:26.131713 systemd-journald[920]: Received client request to flush runtime journal. Dec 13 14:32:26.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.087981 systemd[1]: Finished systemd-random-seed.service. 
Dec 13 14:32:26.088595 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:32:26.133136 udevadm[957]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:32:26.100412 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:32:26.108324 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:32:26.109091 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:32:26.110842 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:32:26.112469 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:32:26.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.132821 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:32:26.160700 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:32:26.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.162579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:32:26.214792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:32:26.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.825055 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:32:26.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:26.826000 audit: BPF prog-id=18 op=LOAD Dec 13 14:32:26.828000 audit: BPF prog-id=19 op=LOAD Dec 13 14:32:26.828000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:32:26.828000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:32:26.830593 systemd[1]: Starting systemd-udevd.service... Dec 13 14:32:26.882385 systemd-udevd[961]: Using default interface naming scheme 'v252'. Dec 13 14:32:26.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:26.927000 audit: BPF prog-id=20 op=LOAD Dec 13 14:32:26.922092 systemd[1]: Started systemd-udevd.service. Dec 13 14:32:26.932548 systemd[1]: Starting systemd-networkd.service... Dec 13 14:32:26.953000 audit: BPF prog-id=21 op=LOAD Dec 13 14:32:26.955000 audit: BPF prog-id=22 op=LOAD Dec 13 14:32:26.955000 audit: BPF prog-id=23 op=LOAD Dec 13 14:32:26.957722 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:32:26.998691 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:32:27.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:27.008732 systemd[1]: Started systemd-userdbd.service. 
Dec 13 14:32:27.074949 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:32:27.086523 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:32:27.112000 audit[972]: AVC avc: denied { confidentiality } for pid=972 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:32:27.121180 systemd-networkd[977]: lo: Link UP Dec 13 14:32:27.121189 systemd-networkd[977]: lo: Gained carrier Dec 13 14:32:27.121676 systemd-networkd[977]: Enumeration completed Dec 13 14:32:27.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:27.121807 systemd-networkd[977]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:32:27.121818 systemd[1]: Started systemd-networkd.service. 
Dec 13 14:32:27.112000 audit[972]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558337752080 a1=337fc a2=7fb6848c8bc5 a3=5 items=110 ppid=961 pid=972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:27.112000 audit: CWD cwd="/" Dec 13 14:32:27.112000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=1 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=2 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=3 name=(null) inode=13278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=4 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=5 name=(null) inode=13279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=6 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=7 name=(null) inode=13280 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=8 name=(null) inode=13280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=9 name=(null) inode=13281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=10 name=(null) inode=13280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=11 name=(null) inode=13282 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=12 name=(null) inode=13280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=13 name=(null) inode=13283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=14 name=(null) inode=13280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=15 name=(null) inode=13284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=16 name=(null) inode=13280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=17 name=(null) inode=13285 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=18 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=19 name=(null) inode=13286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=20 name=(null) inode=13286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=21 name=(null) inode=13287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=22 name=(null) inode=13286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=23 name=(null) inode=13288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=24 name=(null) inode=13286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:27.112000 audit: PATH item=25 name=(null) inode=13289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=26 name=(null) inode=13286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=27 name=(null) inode=13290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=28 name=(null) inode=13286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.124471 systemd-networkd[977]: eth0: Link UP
Dec 13 14:32:27.124476 systemd-networkd[977]: eth0: Gained carrier
Dec 13 14:32:27.112000 audit: PATH item=29 name=(null) inode=13291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=30 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=31 name=(null) inode=13292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=32 name=(null) inode=13292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=33 name=(null) inode=13293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=34 name=(null) inode=13292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=35 name=(null) inode=13294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=36 name=(null) inode=13292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=37 name=(null) inode=13295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=38 name=(null) inode=13292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=39 name=(null) inode=13296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=40 name=(null) inode=13292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=41 name=(null) inode=13297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=42 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=43 name=(null) inode=13298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=44 name=(null) inode=13298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=45 name=(null) inode=13299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=46 name=(null) inode=13298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=47 name=(null) inode=13300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=48 name=(null) inode=13298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=49 name=(null) inode=13301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=50 name=(null) inode=13298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=51 name=(null) inode=13302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=52 name=(null) inode=13298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=53 name=(null) inode=13303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=55 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=56 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=57 name=(null) inode=13305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=58 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=59 name=(null) inode=13306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=60 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=61 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=62 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=63 name=(null) inode=13308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=64 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=65 name=(null) inode=13309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=66 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=67 name=(null) inode=13310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=68 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=69 name=(null) inode=13311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=70 name=(null) inode=13307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=71 name=(null) inode=13312 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=72 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=73 name=(null) inode=14337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=74 name=(null) inode=14337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=75 name=(null) inode=14338 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=76 name=(null) inode=14337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=77 name=(null) inode=14339 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=78 name=(null) inode=14337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=79 name=(null) inode=14340 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=80 name=(null) inode=14337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=81 name=(null) inode=14341 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=82 name=(null) inode=14337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=83 name=(null) inode=14342 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=84 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=85 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=86 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=87 name=(null) inode=14344 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=88 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=89 name=(null) inode=14345 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=90 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=91 name=(null) inode=14346 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=92 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=93 name=(null) inode=14347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=94 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=95 name=(null) inode=14348 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=96 name=(null) inode=13304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=97 name=(null) inode=14349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=98 name=(null) inode=14349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=99 name=(null) inode=14350 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=100 name=(null) inode=14349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=101 name=(null) inode=14351 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=102 name=(null) inode=14349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=103 name=(null) inode=14352 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=104 name=(null) inode=14349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=105 name=(null) inode=14353 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=106 name=(null) inode=14349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=107 name=(null) inode=14354 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PATH item=109 name=(null) inode=14355 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.112000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:32:27.137048 systemd-networkd[977]: eth0: DHCPv4 address 172.24.4.94/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 14:32:27.144944 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 14:32:27.154958 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 14:32:27.159921 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:32:27.168171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:32:27.207490 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:32:27.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.209693 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:32:27.243517 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:32:27.285193 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:32:27.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.286685 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:32:27.290223 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:32:27.299159 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:32:27.342126 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:32:27.343523 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:32:27.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.344649 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:32:27.344714 systemd[1]: Reached target local-fs.target.
Dec 13 14:32:27.345782 systemd[1]: Reached target machines.target.
Dec 13 14:32:27.349408 systemd[1]: Starting ldconfig.service...
Dec 13 14:32:27.351693 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:27.351791 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:27.353986 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:32:27.358101 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:32:27.364117 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:32:27.368624 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:32:27.390371 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl)
Dec 13 14:32:27.394713 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:32:27.406371 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:32:27.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.411807 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:32:27.416859 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:32:27.417260 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:32:27.446918 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 14:32:27.511114 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:32:27.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.514848 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:32:27.549235 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:32:27.586997 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 14:32:27.633967 (sd-sysext)[1005]: Using extensions 'kubernetes'.
Dec 13 14:32:27.636229 (sd-sysext)[1005]: Merged extensions into '/usr'.
Dec 13 14:32:27.681860 systemd-fsck[1002]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:32:27.681860 systemd-fsck[1002]: /dev/vda1: 789 files, 119291/258078 clusters
Dec 13 14:32:27.688497 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:32:27.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.691864 systemd[1]: Mounting boot.mount...
Dec 13 14:32:27.692529 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:27.699435 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:32:27.700291 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:27.702348 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:27.704289 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:27.708582 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:27.709417 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:27.709575 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:27.709714 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:27.716483 systemd[1]: Mounted boot.mount.
Dec 13 14:32:27.717958 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:32:27.718716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:27.718873 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:27.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.719694 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:32:27.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.724588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:27.724722 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:27.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.727152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:27.727294 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:27.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.734121 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:32:27.736102 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:27.736150 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:32:27.739072 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:32:27.746643 systemd[1]: Reloading.
Dec 13 14:32:27.755830 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:32:27.759052 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:32:27.767229 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:32:27.849915 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-12-13T14:32:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:32:27.850315 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-12-13T14:32:27Z" level=info msg="torcx already run"
Dec 13 14:32:27.983844 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:32:27.983865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:32:28.013230 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:32:28.084000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:32:28.084000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:32:28.085000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:32:28.085000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:32:28.085000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:32:28.085000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:32:28.088000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:32:28.088000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:32:28.089000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:32:28.089000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:32:28.089000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:32:28.089000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:32:28.089000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:32:28.089000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:32:28.091000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:32:28.091000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:32:28.091000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:32:28.091000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:32:28.096208 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:32:28.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.098122 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:32:28.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.104124 systemd[1]: Starting audit-rules.service...
Dec 13 14:32:28.105696 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:32:28.108496 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:32:28.109000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:32:28.112025 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:32:28.116000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:32:28.118671 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:32:28.121590 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:32:28.135324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:28.148000 audit[1086]: SYSTEM_BOOT pid=1086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.149688 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:28.153538 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:28.157999 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:28.158575 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:28.158733 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:28.160204 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:32:28.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.161410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:28.161606 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:28.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.166466 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:32:28.167970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:28.168123 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:28.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.172731 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:28.172857 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:28.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.175239 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:28.178183 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:28.180949 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:28.182189 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:28.182353 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:28.182502 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:32:28.184630 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:32:28.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.186009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:28.186155 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:28.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.192818 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:28.194258 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:28.196934 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:32:28.198085 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:32:28.199857 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:28.201066 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:28.201199 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:28.204257 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:32:28.205028 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:32:28.208219 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:32:28.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.211268 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:32:28.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.217601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:28.217742 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:28.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.218382 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:28.218858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:28.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.219019 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:28.221085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:28.221206 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:28.221759 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:32:28.222655 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:32:28.222770 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:32:28.234478 systemd[1]: Finished ldconfig.service.
Dec 13 14:32:28.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.236339 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:32:28.249392 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:32:28.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:28.263000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:32:28.263000 audit[1111]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd392d3700 a2=420 a3=0 items=0 ppid=1080 pid=1111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:28.263000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:32:28.265703 augenrules[1111]: No rules
Dec 13 14:32:28.266340 systemd[1]: Finished audit-rules.service.
Dec 13 14:32:28.282587 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:32:28.283233 systemd[1]: Reached target time-set.target.
Dec 13 14:32:28.285719 systemd-resolved[1083]: Positive Trust Anchors:
Dec 13 14:32:28.285737 systemd-resolved[1083]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:32:28.285772 systemd-resolved[1083]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:32:28.293814 systemd-resolved[1083]: Using system hostname 'ci-3510-3-6-f-a8c495e5df.novalocal'.
Dec 13 14:32:28.295403 systemd[1]: Started systemd-resolved.service.
Dec 13 14:32:28.295956 systemd[1]: Reached target network.target.
Dec 13 14:32:28.296369 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:32:29.290088 systemd-timesyncd[1085]: Contacted time server 51.68.44.27:123 (0.flatcar.pool.ntp.org).
Dec 13 14:32:29.290156 systemd[1]: Reached target sysinit.target.
Dec 13 14:32:29.290441 systemd-timesyncd[1085]: Initial clock synchronization to Fri 2024-12-13 14:32:29.290008 UTC.
Dec 13 14:32:29.290683 systemd[1]: Started motdgen.path.
Dec 13 14:32:29.291115 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:32:29.291784 systemd[1]: Started logrotate.timer.
Dec 13 14:32:29.292326 systemd[1]: Started mdadm.timer.
Dec 13 14:32:29.292720 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:32:29.293149 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:32:29.293183 systemd[1]: Reached target paths.target.
Dec 13 14:32:29.293590 systemd[1]: Reached target timers.target.
Dec 13 14:32:29.294345 systemd[1]: Listening on dbus.socket.
Dec 13 14:32:29.295488 systemd-resolved[1083]: Clock change detected. Flushing caches.
Dec 13 14:32:29.296193 systemd[1]: Starting docker.socket...
Dec 13 14:32:29.299935 systemd[1]: Listening on sshd.socket.
Dec 13 14:32:29.300481 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:29.300949 systemd[1]: Listening on docker.socket.
Dec 13 14:32:29.301546 systemd[1]: Reached target sockets.target.
Dec 13 14:32:29.301960 systemd[1]: Reached target basic.target.
Dec 13 14:32:29.302435 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:32:29.302467 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:32:29.303540 systemd[1]: Starting containerd.service...
Dec 13 14:32:29.304968 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:32:29.307070 systemd[1]: Starting dbus.service...
Dec 13 14:32:29.310743 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:32:29.315899 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:32:29.318600 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:32:29.321449 systemd[1]: Starting motdgen.service...
Dec 13 14:32:29.325398 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:32:29.327222 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:32:29.332710 systemd[1]: Starting systemd-logind.service...
Dec 13 14:32:29.334190 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:29.334290 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:32:29.334770 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:32:29.343887 jq[1133]: true
Dec 13 14:32:29.337039 systemd[1]: Starting update-engine.service...
Dec 13 14:32:29.339339 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:32:29.342856 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:32:29.343049 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:32:29.354542 jq[1124]: false
Dec 13 14:32:29.350715 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:32:29.350901 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:32:29.369948 jq[1136]: true
Dec 13 14:32:29.392329 dbus-daemon[1121]: [system] SELinux support is enabled
Dec 13 14:32:29.392548 systemd[1]: Started dbus.service.
Dec 13 14:32:29.395211 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:32:29.395258 systemd[1]: Reached target system-config.target.
Dec 13 14:32:29.395758 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:32:29.395782 systemd[1]: Reached target user-config.target.
Dec 13 14:32:29.399135 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:32:29.399337 systemd[1]: Finished motdgen.service.
Dec 13 14:32:29.404189 extend-filesystems[1125]: Found loop1
Dec 13 14:32:29.404928 extend-filesystems[1125]: Found vda
Dec 13 14:32:29.404928 extend-filesystems[1125]: Found vda1
Dec 13 14:32:29.404928 extend-filesystems[1125]: Found vda2
Dec 13 14:32:29.404928 extend-filesystems[1125]: Found vda3
Dec 13 14:32:29.407020 extend-filesystems[1125]: Found usr
Dec 13 14:32:29.407503 extend-filesystems[1125]: Found vda4
Dec 13 14:32:29.407503 extend-filesystems[1125]: Found vda6
Dec 13 14:32:29.407503 extend-filesystems[1125]: Found vda7
Dec 13 14:32:29.407503 extend-filesystems[1125]: Found vda9
Dec 13 14:32:29.407503 extend-filesystems[1125]: Checking size of /dev/vda9
Dec 13 14:32:29.410771 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:29.410799 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:29.441164 extend-filesystems[1125]: Resized partition /dev/vda9
Dec 13 14:32:29.461340 extend-filesystems[1171]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:32:29.473463 env[1138]: time="2024-12-13T14:32:29.473393151Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:32:29.508831 systemd-logind[1130]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:32:29.509299 systemd-logind[1130]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:32:29.510459 systemd-logind[1130]: New seat seat0.
Dec 13 14:32:29.512530 systemd[1]: Started systemd-logind.service.
Dec 13 14:32:29.518592 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Dec 13 14:32:29.518663 bash[1167]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:32:29.518867 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:32:29.527443 update_engine[1131]: I1213 14:32:29.526532 1131 main.cc:92] Flatcar Update Engine starting
Dec 13 14:32:29.535033 systemd[1]: Started update-engine.service.
Dec 13 14:32:29.537838 update_engine[1131]: I1213 14:32:29.537647 1131 update_check_scheduler.cc:74] Next update check in 4m38s
Dec 13 14:32:29.538367 systemd[1]: Started locksmithd.service.
Dec 13 14:32:29.542229 env[1138]: time="2024-12-13T14:32:29.542173840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:32:29.542432 env[1138]: time="2024-12-13T14:32:29.542407608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545150 env[1138]: time="2024-12-13T14:32:29.545103354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545150 env[1138]: time="2024-12-13T14:32:29.545141125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545374 env[1138]: time="2024-12-13T14:32:29.545345598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545437 env[1138]: time="2024-12-13T14:32:29.545372529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545437 env[1138]: time="2024-12-13T14:32:29.545395171Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:32:29.545437 env[1138]: time="2024-12-13T14:32:29.545408196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545519 env[1138]: time="2024-12-13T14:32:29.545493195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545756 env[1138]: time="2024-12-13T14:32:29.545731572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545884 env[1138]: time="2024-12-13T14:32:29.545856496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:32:29.545925 env[1138]: time="2024-12-13T14:32:29.545881273Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:32:29.545955 env[1138]: time="2024-12-13T14:32:29.545938811Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:32:29.545992 env[1138]: time="2024-12-13T14:32:29.545955312Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:32:29.615290 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Dec 13 14:32:29.687634 extend-filesystems[1171]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:32:29.687634 extend-filesystems[1171]: old_desc_blocks = 1, new_desc_blocks = 3
Dec 13 14:32:29.687634 extend-filesystems[1171]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Dec 13 14:32:29.689892 extend-filesystems[1125]: Resized filesystem in /dev/vda9
Dec 13 14:32:29.689768 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:32:29.690207 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:32:29.694976 env[1138]: time="2024-12-13T14:32:29.694815028Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:32:29.695090 env[1138]: time="2024-12-13T14:32:29.695040190Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:32:29.695175 env[1138]: time="2024-12-13T14:32:29.695137874Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:32:29.695363 env[1138]: time="2024-12-13T14:32:29.695323432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.695568 env[1138]: time="2024-12-13T14:32:29.695431885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.695662 env[1138]: time="2024-12-13T14:32:29.695585854Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.695697 env[1138]: time="2024-12-13T14:32:29.695676203Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.695791 env[1138]: time="2024-12-13T14:32:29.695755752Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.695878 env[1138]: time="2024-12-13T14:32:29.695840792Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.695964 env[1138]: time="2024-12-13T14:32:29.695893260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.696069 env[1138]: time="2024-12-13T14:32:29.695994300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.696150 env[1138]: time="2024-12-13T14:32:29.696082826Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:32:29.697098 env[1138]: time="2024-12-13T14:32:29.697053446Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:32:29.697527 env[1138]: time="2024-12-13T14:32:29.697442787Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:32:29.698602 env[1138]: time="2024-12-13T14:32:29.698522391Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:32:29.698728 env[1138]: time="2024-12-13T14:32:29.698687922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.698818 env[1138]: time="2024-12-13T14:32:29.698777259Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:32:29.699040 env[1138]: time="2024-12-13T14:32:29.699000929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699164 env[1138]: time="2024-12-13T14:32:29.699092150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699202 env[1138]: time="2024-12-13T14:32:29.699181257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699288 env[1138]: time="2024-12-13T14:32:29.699223156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699356 env[1138]: time="2024-12-13T14:32:29.699319075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699438 env[1138]: time="2024-12-13T14:32:29.699402482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699518 env[1138]: time="2024-12-13T14:32:29.699449350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699551 env[1138]: time="2024-12-13T14:32:29.699530021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.699662 env[1138]: time="2024-12-13T14:32:29.699625290Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:32:29.700297 env[1138]: time="2024-12-13T14:32:29.700230124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.700337 env[1138]: time="2024-12-13T14:32:29.700313300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.700424 env[1138]: time="2024-12-13T14:32:29.700350941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.700457 env[1138]: time="2024-12-13T14:32:29.700438685Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:32:29.700521 env[1138]: time="2024-12-13T14:32:29.700481045Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:32:29.700562 env[1138]: time="2024-12-13T14:32:29.700522563Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:32:29.700618 env[1138]: time="2024-12-13T14:32:29.700572707Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:32:29.700754 env[1138]: time="2024-12-13T14:32:29.700717037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:32:29.701468 env[1138]: time="2024-12-13T14:32:29.701325388Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:32:29.705032 env[1138]: time="2024-12-13T14:32:29.701497972Z" level=info msg="Connect containerd service"
Dec 13 14:32:29.705032 env[1138]: time="2024-12-13T14:32:29.701660256Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:32:29.705032 env[1138]: time="2024-12-13T14:32:29.704177057Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:32:29.706105 env[1138]: time="2024-12-13T14:32:29.706019502Z" level=info msg="Start subscribing containerd event"
Dec 13 14:32:29.706169 env[1138]: time="2024-12-13T14:32:29.706136512Z" level=info msg="Start recovering state"
Dec 13 14:32:29.706532 env[1138]: time="2024-12-13T14:32:29.706490045Z" level=info msg="Start event monitor"
Dec 13 14:32:29.706588 env[1138]: time="2024-12-13T14:32:29.706565646Z" level=info msg="Start snapshots syncer"
Dec 13 14:32:29.706623 env[1138]: time="2024-12-13T14:32:29.706592607Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:32:29.706623 env[1138]: time="2024-12-13T14:32:29.706613987Z" level=info msg="Start streaming server"
Dec 13 14:32:29.706973 env[1138]: time="2024-12-13T14:32:29.706929429Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:32:29.707322 env[1138]: time="2024-12-13T14:32:29.707232788Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:32:29.715180 env[1138]: time="2024-12-13T14:32:29.715147822Z" level=info msg="containerd successfully booted in 0.242635s"
Dec 13 14:32:29.715335 systemd[1]: Started containerd.service.
Dec 13 14:32:29.813130 locksmithd[1175]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:32:29.949597 systemd-networkd[977]: eth0: Gained IPv6LL
Dec 13 14:32:29.952720 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:32:29.954725 systemd[1]: Reached target network-online.target.
Dec 13 14:32:29.960204 systemd[1]: Starting kubelet.service...
Dec 13 14:32:30.403217 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:32:30.596110 sshd_keygen[1148]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:32:30.619651 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:32:30.621864 systemd[1]: Starting issuegen.service...
Dec 13 14:32:30.623477 systemd[1]: Started sshd@0-172.24.4.94:22-172.24.4.1:53576.service.
Dec 13 14:32:30.631187 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:32:30.631365 systemd[1]: Finished issuegen.service.
Dec 13 14:32:30.633348 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:32:30.645019 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:32:30.647114 systemd[1]: Started getty@tty1.service.
Dec 13 14:32:30.648829 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:32:30.649588 systemd[1]: Reached target getty.target.
Dec 13 14:32:31.870764 sshd[1196]: Accepted publickey for core from 172.24.4.1 port 53576 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI
Dec 13 14:32:31.874177 sshd[1196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:31.903877 systemd[1]: Created slice user-500.slice.
Dec 13 14:32:31.909621 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:32:31.917636 systemd-logind[1130]: New session 1 of user core.
Dec 13 14:32:31.929705 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:32:31.938949 systemd[1]: Starting user@500.service...
Dec 13 14:32:31.946065 (systemd)[1204]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:32.049068 systemd[1204]: Queued start job for default target default.target.
Dec 13 14:32:32.049696 systemd[1204]: Reached target paths.target.
Dec 13 14:32:32.049720 systemd[1204]: Reached target sockets.target.
Dec 13 14:32:32.049735 systemd[1204]: Reached target timers.target.
Dec 13 14:32:32.049753 systemd[1204]: Reached target basic.target.
Dec 13 14:32:32.049883 systemd[1]: Started user@500.service.
Dec 13 14:32:32.051392 systemd[1]: Started session-1.scope.
Dec 13 14:32:32.052212 systemd[1204]: Reached target default.target.
Dec 13 14:32:32.052386 systemd[1204]: Startup finished in 91ms.
Dec 13 14:32:32.344421 systemd[1]: Started kubelet.service.
Dec 13 14:32:32.944504 systemd[1]: Started sshd@1-172.24.4.94:22-172.24.4.1:53582.service.
Dec 13 14:32:33.894425 kubelet[1213]: E1213 14:32:33.894293 1213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:32:33.898786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:32:33.899314 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:32:33.899962 systemd[1]: kubelet.service: Consumed 2.038s CPU time.
Dec 13 14:32:34.484772 sshd[1222]: Accepted publickey for core from 172.24.4.1 port 53582 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI
Dec 13 14:32:34.487794 sshd[1222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:34.499846 systemd-logind[1130]: New session 2 of user core.
Dec 13 14:32:34.500724 systemd[1]: Started session-2.scope.
Dec 13 14:32:35.128855 sshd[1222]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:35.136899 systemd[1]: Started sshd@2-172.24.4.94:22-172.24.4.1:36994.service.
Dec 13 14:32:35.141416 systemd[1]: sshd@1-172.24.4.94:22-172.24.4.1:53582.service: Deactivated successfully.
Dec 13 14:32:35.142932 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:32:35.147881 systemd-logind[1130]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:32:35.151189 systemd-logind[1130]: Removed session 2.
Dec 13 14:32:36.426898 coreos-metadata[1120]: Dec 13 14:32:36.426 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 14:32:36.515222 coreos-metadata[1120]: Dec 13 14:32:36.515 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 14:32:36.551346 sshd[1227]: Accepted publickey for core from 172.24.4.1 port 36994 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI
Dec 13 14:32:36.554659 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:36.567567 systemd-logind[1130]: New session 3 of user core.
Dec 13 14:32:36.568647 systemd[1]: Started session-3.scope.
Dec 13 14:32:36.880323 coreos-metadata[1120]: Dec 13 14:32:36.879 INFO Fetch successful
Dec 13 14:32:36.880323 coreos-metadata[1120]: Dec 13 14:32:36.879 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 14:32:36.890099 coreos-metadata[1120]: Dec 13 14:32:36.889 INFO Fetch successful
Dec 13 14:32:36.896503 unknown[1120]: wrote ssh authorized keys file for user: core
Dec 13 14:32:36.935244 update-ssh-keys[1233]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:32:36.937064 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:32:36.937928 systemd[1]: Reached target multi-user.target.
Dec 13 14:32:36.940683 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:32:36.959959 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:32:36.960340 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:32:36.961775 systemd[1]: Startup finished in 1.043s (kernel) + 9.462s (initrd) + 15.717s (userspace) = 26.223s.
Dec 13 14:32:37.279197 sshd[1227]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:37.286200 systemd[1]: sshd@2-172.24.4.94:22-172.24.4.1:36994.service: Deactivated successfully.
Dec 13 14:32:37.288126 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:32:37.289660 systemd-logind[1130]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:32:37.291978 systemd-logind[1130]: Removed session 3. Dec 13 14:32:44.151435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:32:44.151888 systemd[1]: Stopped kubelet.service. Dec 13 14:32:44.151974 systemd[1]: kubelet.service: Consumed 2.038s CPU time. Dec 13 14:32:44.155057 systemd[1]: Starting kubelet.service... Dec 13 14:32:44.508081 systemd[1]: Started kubelet.service. Dec 13 14:32:44.967386 kubelet[1241]: E1213 14:32:44.966946 1241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:32:44.978011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:32:44.978573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:32:47.286120 systemd[1]: Started sshd@3-172.24.4.94:22-172.24.4.1:48272.service. Dec 13 14:32:48.696969 sshd[1248]: Accepted publickey for core from 172.24.4.1 port 48272 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:32:48.700018 sshd[1248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:48.710874 systemd-logind[1130]: New session 4 of user core. Dec 13 14:32:48.711710 systemd[1]: Started session-4.scope. Dec 13 14:32:49.290065 sshd[1248]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:49.296002 systemd[1]: Started sshd@4-172.24.4.94:22-172.24.4.1:48284.service. Dec 13 14:32:49.300999 systemd[1]: sshd@3-172.24.4.94:22-172.24.4.1:48272.service: Deactivated successfully. Dec 13 14:32:49.302701 systemd[1]: session-4.scope: Deactivated successfully. 
Dec 13 14:32:49.305190 systemd-logind[1130]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:32:49.307618 systemd-logind[1130]: Removed session 4. Dec 13 14:32:50.765165 sshd[1253]: Accepted publickey for core from 172.24.4.1 port 48284 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:32:50.768417 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:50.778986 systemd[1]: Started session-5.scope. Dec 13 14:32:50.780202 systemd-logind[1130]: New session 5 of user core. Dec 13 14:32:51.409139 sshd[1253]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:51.416147 systemd[1]: Started sshd@5-172.24.4.94:22-172.24.4.1:48292.service. Dec 13 14:32:51.420703 systemd[1]: sshd@4-172.24.4.94:22-172.24.4.1:48284.service: Deactivated successfully. Dec 13 14:32:51.422360 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:32:51.425916 systemd-logind[1130]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:32:51.428626 systemd-logind[1130]: Removed session 5. Dec 13 14:32:52.776168 sshd[1259]: Accepted publickey for core from 172.24.4.1 port 48292 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:32:52.779627 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:52.790914 systemd-logind[1130]: New session 6 of user core. Dec 13 14:32:52.791790 systemd[1]: Started session-6.scope. Dec 13 14:32:53.419534 sshd[1259]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:53.430495 systemd[1]: Started sshd@6-172.24.4.94:22-172.24.4.1:48306.service. Dec 13 14:32:53.431747 systemd[1]: sshd@5-172.24.4.94:22-172.24.4.1:48292.service: Deactivated successfully. Dec 13 14:32:53.433215 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:32:53.437738 systemd-logind[1130]: Session 6 logged out. Waiting for processes to exit. 
Dec 13 14:32:53.440176 systemd-logind[1130]: Removed session 6. Dec 13 14:32:54.619410 sshd[1265]: Accepted publickey for core from 172.24.4.1 port 48306 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:32:54.623051 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:54.633556 systemd-logind[1130]: New session 7 of user core. Dec 13 14:32:54.634460 systemd[1]: Started session-7.scope. Dec 13 14:32:55.144855 sudo[1269]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:32:55.145392 sudo[1269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:32:55.147000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:32:55.147566 systemd[1]: Stopped kubelet.service. Dec 13 14:32:55.150362 systemd[1]: Starting kubelet.service... Dec 13 14:32:55.192006 systemd[1]: Starting coreos-metadata.service... Dec 13 14:32:55.503580 systemd[1]: Started kubelet.service. Dec 13 14:32:55.810595 kubelet[1280]: E1213 14:32:55.809856 1280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:32:55.814385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:32:55.814710 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 14:33:02.257743 coreos-metadata[1275]: Dec 13 14:33:02.257 WARN failed to locate config-drive, using the metadata service API instead Dec 13 14:33:02.350009 coreos-metadata[1275]: Dec 13 14:33:02.349 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 14:33:02.701696 coreos-metadata[1275]: Dec 13 14:33:02.701 INFO Fetch successful Dec 13 14:33:02.702077 coreos-metadata[1275]: Dec 13 14:33:02.701 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 14:33:02.721136 coreos-metadata[1275]: Dec 13 14:33:02.720 INFO Fetch successful Dec 13 14:33:02.721574 coreos-metadata[1275]: Dec 13 14:33:02.721 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 14:33:02.734114 coreos-metadata[1275]: Dec 13 14:33:02.733 INFO Fetch successful Dec 13 14:33:02.734451 coreos-metadata[1275]: Dec 13 14:33:02.734 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 14:33:02.744910 coreos-metadata[1275]: Dec 13 14:33:02.744 INFO Fetch successful Dec 13 14:33:02.745209 coreos-metadata[1275]: Dec 13 14:33:02.745 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 14:33:02.754545 coreos-metadata[1275]: Dec 13 14:33:02.754 INFO Fetch successful Dec 13 14:33:02.772996 systemd[1]: Finished coreos-metadata.service. Dec 13 14:33:04.073451 systemd[1]: Stopped kubelet.service. Dec 13 14:33:04.081958 systemd[1]: Starting kubelet.service... Dec 13 14:33:04.144630 systemd[1]: Reloading. 
Dec 13 14:33:04.266869 /usr/lib/systemd/system-generators/torcx-generator[1342]: time="2024-12-13T14:33:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:33:04.270365 /usr/lib/systemd/system-generators/torcx-generator[1342]: time="2024-12-13T14:33:04Z" level=info msg="torcx already run" Dec 13 14:33:05.085817 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:33:05.086109 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:33:05.110887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:33:05.996751 systemd[1]: Started kubelet.service. Dec 13 14:33:06.005787 systemd[1]: Stopping kubelet.service... Dec 13 14:33:06.007458 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:33:06.007877 systemd[1]: Stopped kubelet.service. Dec 13 14:33:06.011919 systemd[1]: Starting kubelet.service... Dec 13 14:33:06.130746 systemd[1]: Started kubelet.service. Dec 13 14:33:06.219225 kubelet[1393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:33:06.219225 kubelet[1393]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 14:33:06.219225 kubelet[1393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:33:06.219637 kubelet[1393]: I1213 14:33:06.219313 1393 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:33:07.603591 kubelet[1393]: I1213 14:33:07.603533 1393 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:33:07.603591 kubelet[1393]: I1213 14:33:07.603570 1393 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:33:07.604345 kubelet[1393]: I1213 14:33:07.603861 1393 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:33:07.636153 kubelet[1393]: I1213 14:33:07.636085 1393 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:33:07.653439 kubelet[1393]: E1213 14:33:07.653373 1393 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:33:07.653439 kubelet[1393]: I1213 14:33:07.653420 1393 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:33:07.659081 kubelet[1393]: I1213 14:33:07.659032 1393 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:33:07.662641 kubelet[1393]: I1213 14:33:07.662599 1393 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:33:07.663331 kubelet[1393]: I1213 14:33:07.663223 1393 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:33:07.663849 kubelet[1393]: I1213 14:33:07.663479 1393 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.94","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Dec 13 14:33:07.664126 kubelet[1393]: I1213 14:33:07.664102 1393 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:33:07.664299 kubelet[1393]: I1213 14:33:07.664243 1393 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:33:07.664624 kubelet[1393]: I1213 14:33:07.664600 1393 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:33:07.681201 kubelet[1393]: I1213 14:33:07.681159 1393 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:33:07.681532 kubelet[1393]: I1213 14:33:07.681509 1393 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:33:07.681751 kubelet[1393]: I1213 14:33:07.681727 1393 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:33:07.681912 kubelet[1393]: I1213 14:33:07.681890 1393 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:33:07.692588 kubelet[1393]: E1213 14:33:07.692494 1393 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:07.692753 kubelet[1393]: E1213 14:33:07.692626 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:07.702750 kubelet[1393]: I1213 14:33:07.702704 1393 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:33:07.709609 kubelet[1393]: I1213 14:33:07.709568 1393 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:33:07.710163 kubelet[1393]: W1213 14:33:07.710136 1393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:33:07.712609 kubelet[1393]: I1213 14:33:07.712544 1393 server.go:1269] "Started kubelet" Dec 13 14:33:07.715331 kubelet[1393]: W1213 14:33:07.715234 1393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:33:07.715481 kubelet[1393]: E1213 14:33:07.715381 1393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 14:33:07.715877 kubelet[1393]: W1213 14:33:07.715819 1393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:33:07.716105 kubelet[1393]: E1213 14:33:07.715885 1393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.94\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 14:33:07.717833 kubelet[1393]: I1213 14:33:07.716609 1393 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:33:07.725561 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 14:33:07.725743 kubelet[1393]: I1213 14:33:07.725719 1393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:33:07.726597 kubelet[1393]: I1213 14:33:07.726495 1393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:33:07.727752 kubelet[1393]: I1213 14:33:07.727717 1393 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:33:07.738314 kubelet[1393]: I1213 14:33:07.738210 1393 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:33:07.738687 kubelet[1393]: I1213 14:33:07.738595 1393 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:33:07.739437 kubelet[1393]: I1213 14:33:07.739355 1393 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:33:07.739579 kubelet[1393]: I1213 14:33:07.739563 1393 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:33:07.743144 kubelet[1393]: E1213 14:33:07.743016 1393 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:33:07.743144 kubelet[1393]: E1213 14:33:07.738734 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:07.743781 kubelet[1393]: I1213 14:33:07.743744 1393 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:33:07.749034 kubelet[1393]: I1213 14:33:07.748933 1393 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:33:07.751595 kubelet[1393]: I1213 14:33:07.751452 1393 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:33:07.757122 kubelet[1393]: I1213 14:33:07.757045 1393 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:33:07.768742 kubelet[1393]: E1213 14:33:07.768696 1393 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.94\" not found" node="172.24.4.94" Dec 13 14:33:07.788732 kubelet[1393]: I1213 14:33:07.788696 1393 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:33:07.788732 kubelet[1393]: I1213 14:33:07.788722 1393 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:33:07.788920 kubelet[1393]: I1213 14:33:07.788768 1393 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:33:07.795974 kubelet[1393]: I1213 14:33:07.795934 1393 policy_none.go:49] "None policy: Start" Dec 13 14:33:07.797546 kubelet[1393]: I1213 14:33:07.797509 1393 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:33:07.797665 kubelet[1393]: I1213 14:33:07.797561 1393 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:33:07.813121 systemd[1]: Created 
slice kubepods.slice. Dec 13 14:33:07.824109 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:33:07.843470 kubelet[1393]: E1213 14:33:07.843413 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:07.845537 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:33:07.854035 kubelet[1393]: I1213 14:33:07.853929 1393 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:33:07.854372 kubelet[1393]: I1213 14:33:07.854359 1393 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:33:07.854495 kubelet[1393]: I1213 14:33:07.854452 1393 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:33:07.855314 kubelet[1393]: I1213 14:33:07.855301 1393 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:33:07.861290 kubelet[1393]: E1213 14:33:07.860549 1393 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.94\" not found" Dec 13 14:33:07.917096 kubelet[1393]: I1213 14:33:07.917017 1393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:33:07.918166 kubelet[1393]: I1213 14:33:07.918115 1393 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:33:07.918166 kubelet[1393]: I1213 14:33:07.918156 1393 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:33:07.918451 kubelet[1393]: I1213 14:33:07.918182 1393 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:33:07.918451 kubelet[1393]: E1213 14:33:07.918234 1393 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:33:07.956844 kubelet[1393]: I1213 14:33:07.956790 1393 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.94" Dec 13 14:33:07.971616 kubelet[1393]: I1213 14:33:07.971574 1393 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.94" Dec 13 14:33:07.971938 kubelet[1393]: E1213 14:33:07.971876 1393 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.94\": node \"172.24.4.94\" not found" Dec 13 14:33:07.993204 kubelet[1393]: E1213 14:33:07.993155 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.094021 kubelet[1393]: E1213 14:33:08.093970 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.195668 kubelet[1393]: E1213 14:33:08.195564 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.201760 sudo[1269]: pam_unix(sudo:session): session closed for user root Dec 13 14:33:08.296335 kubelet[1393]: E1213 14:33:08.296171 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.397372 kubelet[1393]: E1213 14:33:08.397221 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.400178 sshd[1265]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:08.406042 
systemd[1]: sshd@6-172.24.4.94:22-172.24.4.1:48306.service: Deactivated successfully. Dec 13 14:33:08.407886 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:33:08.408227 systemd[1]: session-7.scope: Consumed 1.109s CPU time. Dec 13 14:33:08.409379 systemd-logind[1130]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:33:08.412124 systemd-logind[1130]: Removed session 7. Dec 13 14:33:08.497639 kubelet[1393]: E1213 14:33:08.497444 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.598545 kubelet[1393]: E1213 14:33:08.598486 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.606358 kubelet[1393]: I1213 14:33:08.606149 1393 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:33:08.607021 kubelet[1393]: W1213 14:33:08.606534 1393 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:33:08.607021 kubelet[1393]: W1213 14:33:08.606607 1393 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:33:08.693634 kubelet[1393]: E1213 14:33:08.693568 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:08.698974 kubelet[1393]: E1213 14:33:08.698902 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.800879 kubelet[1393]: E1213 14:33:08.799884 1393 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:08.900661 kubelet[1393]: E1213 14:33:08.900466 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:09.001152 kubelet[1393]: E1213 14:33:09.001001 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:09.102652 kubelet[1393]: E1213 14:33:09.101876 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:09.202609 kubelet[1393]: E1213 14:33:09.202550 1393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.94\" not found" Dec 13 14:33:09.303833 kubelet[1393]: I1213 14:33:09.303788 1393 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:33:09.304955 env[1138]: time="2024-12-13T14:33:09.304762771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:33:09.305614 kubelet[1393]: I1213 14:33:09.305141 1393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:33:09.694410 kubelet[1393]: E1213 14:33:09.694351 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:09.695216 kubelet[1393]: I1213 14:33:09.694435 1393 apiserver.go:52] "Watching apiserver" Dec 13 14:33:09.718984 systemd[1]: Created slice kubepods-burstable-pod70f97af7_716e_4242_ad8d_e0906d610939.slice. Dec 13 14:33:09.741190 systemd[1]: Created slice kubepods-besteffort-pod58986787_76c3_47a5_9f87_eb025121ea15.slice. 
Dec 13 14:33:09.742760 kubelet[1393]: I1213 14:33:09.742716 1393 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:33:09.755061 kubelet[1393]: I1213 14:33:09.754999 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-net\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.755447 kubelet[1393]: I1213 14:33:09.755408 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-kernel\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.755689 kubelet[1393]: I1213 14:33:09.755652 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70f97af7-716e-4242-ad8d-e0906d610939-clustermesh-secrets\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.755911 kubelet[1393]: I1213 14:33:09.755871 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dbsn\" (UniqueName: \"kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-kube-api-access-7dbsn\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.756119 kubelet[1393]: I1213 14:33:09.756083 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjbsl\" (UniqueName: \"kubernetes.io/projected/58986787-76c3-47a5-9f87-eb025121ea15-kube-api-access-jjbsl\") pod 
\"kube-proxy-xqzfs\" (UID: \"58986787-76c3-47a5-9f87-eb025121ea15\") " pod="kube-system/kube-proxy-xqzfs" Dec 13 14:33:09.756379 kubelet[1393]: I1213 14:33:09.756343 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-run\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.756604 kubelet[1393]: I1213 14:33:09.756569 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-hostproc\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.756820 kubelet[1393]: I1213 14:33:09.756786 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-xtables-lock\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.757017 kubelet[1393]: I1213 14:33:09.756985 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58986787-76c3-47a5-9f87-eb025121ea15-kube-proxy\") pod \"kube-proxy-xqzfs\" (UID: \"58986787-76c3-47a5-9f87-eb025121ea15\") " pod="kube-system/kube-proxy-xqzfs" Dec 13 14:33:09.757207 kubelet[1393]: I1213 14:33:09.757175 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-bpf-maps\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.757466 kubelet[1393]: I1213 
14:33:09.757423 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70f97af7-716e-4242-ad8d-e0906d610939-cilium-config-path\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.757723 kubelet[1393]: I1213 14:33:09.757685 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-hubble-tls\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.757941 kubelet[1393]: I1213 14:33:09.757907 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-lib-modules\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.758153 kubelet[1393]: I1213 14:33:09.758120 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58986787-76c3-47a5-9f87-eb025121ea15-xtables-lock\") pod \"kube-proxy-xqzfs\" (UID: \"58986787-76c3-47a5-9f87-eb025121ea15\") " pod="kube-system/kube-proxy-xqzfs" Dec 13 14:33:09.758402 kubelet[1393]: I1213 14:33:09.758362 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58986787-76c3-47a5-9f87-eb025121ea15-lib-modules\") pod \"kube-proxy-xqzfs\" (UID: \"58986787-76c3-47a5-9f87-eb025121ea15\") " pod="kube-system/kube-proxy-xqzfs" Dec 13 14:33:09.758620 kubelet[1393]: I1213 14:33:09.758587 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-cgroup\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.758811 kubelet[1393]: I1213 14:33:09.758779 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cni-path\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.759063 kubelet[1393]: I1213 14:33:09.759028 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-etc-cni-netd\") pod \"cilium-q4ksm\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " pod="kube-system/cilium-q4ksm" Dec 13 14:33:09.864616 kubelet[1393]: I1213 14:33:09.864550 1393 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 14:33:10.037944 env[1138]: time="2024-12-13T14:33:10.035543970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4ksm,Uid:70f97af7-716e-4242-ad8d-e0906d610939,Namespace:kube-system,Attempt:0,}" Dec 13 14:33:10.051843 env[1138]: time="2024-12-13T14:33:10.051770147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xqzfs,Uid:58986787-76c3-47a5-9f87-eb025121ea15,Namespace:kube-system,Attempt:0,}" Dec 13 14:33:10.696378 kubelet[1393]: E1213 14:33:10.696141 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:10.948553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043427143.mount: Deactivated successfully. 
Dec 13 14:33:10.969048 env[1138]: time="2024-12-13T14:33:10.968922387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:10.977577 env[1138]: time="2024-12-13T14:33:10.977512968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:10.979665 env[1138]: time="2024-12-13T14:33:10.979610130Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:10.981801 env[1138]: time="2024-12-13T14:33:10.981727400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:10.987107 env[1138]: time="2024-12-13T14:33:10.987035979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:10.992244 env[1138]: time="2024-12-13T14:33:10.992189907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:10.998162 env[1138]: time="2024-12-13T14:33:10.998087198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:10.999967 env[1138]: time="2024-12-13T14:33:10.999897468Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.136482 env[1138]: time="2024-12-13T14:33:11.136007613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:11.136482 env[1138]: time="2024-12-13T14:33:11.136094658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:11.136482 env[1138]: time="2024-12-13T14:33:11.136139682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:11.136955 env[1138]: time="2024-12-13T14:33:11.136604710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133 pid=1445 runtime=io.containerd.runc.v2 Dec 13 14:33:11.188505 systemd[1]: Started cri-containerd-4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133.scope. Dec 13 14:33:11.227828 env[1138]: time="2024-12-13T14:33:11.227654522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4ksm,Uid:70f97af7-716e-4242-ad8d-e0906d610939,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\"" Dec 13 14:33:11.231170 env[1138]: time="2024-12-13T14:33:11.231125837Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:33:11.554888 env[1138]: time="2024-12-13T14:33:11.554194737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:11.554888 env[1138]: time="2024-12-13T14:33:11.554360240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:11.554888 env[1138]: time="2024-12-13T14:33:11.554394525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:11.555833 env[1138]: time="2024-12-13T14:33:11.554797175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0993c48255d03e48384905b387b4c75e4e120175dc2d88da9353836fa10cc544 pid=1487 runtime=io.containerd.runc.v2 Dec 13 14:33:11.584882 systemd[1]: Started cri-containerd-0993c48255d03e48384905b387b4c75e4e120175dc2d88da9353836fa10cc544.scope. Dec 13 14:33:11.650858 env[1138]: time="2024-12-13T14:33:11.650732084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xqzfs,Uid:58986787-76c3-47a5-9f87-eb025121ea15,Namespace:kube-system,Attempt:0,} returns sandbox id \"0993c48255d03e48384905b387b4c75e4e120175dc2d88da9353836fa10cc544\"" Dec 13 14:33:11.696727 kubelet[1393]: E1213 14:33:11.696654 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:12.697375 kubelet[1393]: E1213 14:33:12.697243 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:13.699213 kubelet[1393]: E1213 14:33:13.699135 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:14.700809 kubelet[1393]: E1213 14:33:14.700704 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:14.742315 update_engine[1131]: I1213 14:33:14.742128 1131 
update_attempter.cc:509] Updating boot flags... Dec 13 14:33:15.701614 kubelet[1393]: E1213 14:33:15.701484 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:16.702414 kubelet[1393]: E1213 14:33:16.702322 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:17.703128 kubelet[1393]: E1213 14:33:17.703075 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:18.707443 kubelet[1393]: E1213 14:33:18.707371 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:19.708970 kubelet[1393]: E1213 14:33:19.708895 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:20.710230 kubelet[1393]: E1213 14:33:20.710113 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:20.940606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260544194.mount: Deactivated successfully. 
Dec 13 14:33:21.711163 kubelet[1393]: E1213 14:33:21.711045 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:22.712100 kubelet[1393]: E1213 14:33:22.711968 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:23.713914 kubelet[1393]: E1213 14:33:23.713847 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:24.715811 kubelet[1393]: E1213 14:33:24.715738 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:25.716437 kubelet[1393]: E1213 14:33:25.716339 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:25.964530 env[1138]: time="2024-12-13T14:33:25.964389821Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:25.970565 env[1138]: time="2024-12-13T14:33:25.970382492Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:25.977204 env[1138]: time="2024-12-13T14:33:25.977092822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:25.979521 env[1138]: time="2024-12-13T14:33:25.979406043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns 
image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:33:25.985883 env[1138]: time="2024-12-13T14:33:25.985779600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 14:33:25.988916 env[1138]: time="2024-12-13T14:33:25.988798137Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:33:26.097847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835287679.mount: Deactivated successfully. Dec 13 14:33:26.139294 env[1138]: time="2024-12-13T14:33:26.139182144Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\"" Dec 13 14:33:26.141676 env[1138]: time="2024-12-13T14:33:26.141612434Z" level=info msg="StartContainer for \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\"" Dec 13 14:33:26.203339 systemd[1]: Started cri-containerd-024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e.scope. Dec 13 14:33:26.268108 env[1138]: time="2024-12-13T14:33:26.267918522Z" level=info msg="StartContainer for \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\" returns successfully" Dec 13 14:33:26.278828 systemd[1]: cri-containerd-024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e.scope: Deactivated successfully. 
Dec 13 14:33:26.865522 kubelet[1393]: E1213 14:33:26.717424 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:26.920507 env[1138]: time="2024-12-13T14:33:26.920376220Z" level=info msg="shim disconnected" id=024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e Dec 13 14:33:26.920887 env[1138]: time="2024-12-13T14:33:26.920843610Z" level=warning msg="cleaning up after shim disconnected" id=024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e namespace=k8s.io Dec 13 14:33:26.921047 env[1138]: time="2024-12-13T14:33:26.921013168Z" level=info msg="cleaning up dead shim" Dec 13 14:33:26.944109 env[1138]: time="2024-12-13T14:33:26.944021002Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1591 runtime=io.containerd.runc.v2\n" Dec 13 14:33:27.059659 env[1138]: time="2024-12-13T14:33:27.059432535Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:33:27.089810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e-rootfs.mount: Deactivated successfully. Dec 13 14:33:27.123194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570001498.mount: Deactivated successfully. Dec 13 14:33:27.144293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146871574.mount: Deactivated successfully. 
Dec 13 14:33:27.161763 env[1138]: time="2024-12-13T14:33:27.161661417Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\"" Dec 13 14:33:27.163377 env[1138]: time="2024-12-13T14:33:27.163304876Z" level=info msg="StartContainer for \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\"" Dec 13 14:33:27.193521 systemd[1]: Started cri-containerd-4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e.scope. Dec 13 14:33:27.250622 env[1138]: time="2024-12-13T14:33:27.250490393Z" level=info msg="StartContainer for \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\" returns successfully" Dec 13 14:33:27.264523 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:33:27.264820 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:33:27.265212 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:33:27.269238 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:33:27.272880 systemd[1]: cri-containerd-4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e.scope: Deactivated successfully. Dec 13 14:33:27.284188 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:33:27.514831 env[1138]: time="2024-12-13T14:33:27.514756131Z" level=info msg="shim disconnected" id=4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e Dec 13 14:33:27.515669 env[1138]: time="2024-12-13T14:33:27.515644652Z" level=warning msg="cleaning up after shim disconnected" id=4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e namespace=k8s.io Dec 13 14:33:27.515983 env[1138]: time="2024-12-13T14:33:27.515964332Z" level=info msg="cleaning up dead shim" Dec 13 14:33:27.544944 env[1138]: time="2024-12-13T14:33:27.544696392Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1659 runtime=io.containerd.runc.v2\n" Dec 13 14:33:27.682776 kubelet[1393]: E1213 14:33:27.682709 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:27.718460 kubelet[1393]: E1213 14:33:27.718379 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:28.061743 env[1138]: time="2024-12-13T14:33:28.061694745Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:33:28.182988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608480139.mount: Deactivated successfully. 
Dec 13 14:33:28.208701 env[1138]: time="2024-12-13T14:33:28.208614196Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\"" Dec 13 14:33:28.209513 env[1138]: time="2024-12-13T14:33:28.209472639Z" level=info msg="StartContainer for \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\"" Dec 13 14:33:28.264476 systemd[1]: Started cri-containerd-660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90.scope. Dec 13 14:33:28.312944 systemd[1]: cri-containerd-660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90.scope: Deactivated successfully. Dec 13 14:33:28.321191 env[1138]: time="2024-12-13T14:33:28.321075412Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f97af7_716e_4242_ad8d_e0906d610939.slice/cri-containerd-660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90.scope/memory.events\": no such file or directory" Dec 13 14:33:28.336094 env[1138]: time="2024-12-13T14:33:28.336020797Z" level=info msg="StartContainer for \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\" returns successfully" Dec 13 14:33:28.719384 kubelet[1393]: E1213 14:33:28.719286 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:28.786141 env[1138]: time="2024-12-13T14:33:28.786043547Z" level=info msg="shim disconnected" id=660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90 Dec 13 14:33:28.786520 env[1138]: time="2024-12-13T14:33:28.786475028Z" level=warning msg="cleaning up after shim disconnected" id=660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90 namespace=k8s.io Dec 13 14:33:28.786679 
env[1138]: time="2024-12-13T14:33:28.786645538Z" level=info msg="cleaning up dead shim" Dec 13 14:33:28.810794 env[1138]: time="2024-12-13T14:33:28.810721547Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1718 runtime=io.containerd.runc.v2\n" Dec 13 14:33:29.063994 env[1138]: time="2024-12-13T14:33:29.063842511Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:33:29.087399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90-rootfs.mount: Deactivated successfully. Dec 13 14:33:29.106718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873111432.mount: Deactivated successfully. Dec 13 14:33:29.237295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969750190.mount: Deactivated successfully. Dec 13 14:33:29.315419 env[1138]: time="2024-12-13T14:33:29.315170764Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\"" Dec 13 14:33:29.317210 env[1138]: time="2024-12-13T14:33:29.317033895Z" level=info msg="StartContainer for \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\"" Dec 13 14:33:29.373936 systemd[1]: Started cri-containerd-7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f.scope. Dec 13 14:33:29.405895 systemd[1]: cri-containerd-7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f.scope: Deactivated successfully. 
Dec 13 14:33:29.483494 env[1138]: time="2024-12-13T14:33:29.483426879Z" level=info msg="StartContainer for \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\" returns successfully" Dec 13 14:33:29.720376 kubelet[1393]: E1213 14:33:29.720308 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:30.089617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165392662.mount: Deactivated successfully. Dec 13 14:33:30.133136 env[1138]: time="2024-12-13T14:33:30.133073386Z" level=info msg="shim disconnected" id=7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f Dec 13 14:33:30.133889 env[1138]: time="2024-12-13T14:33:30.133866196Z" level=warning msg="cleaning up after shim disconnected" id=7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f namespace=k8s.io Dec 13 14:33:30.133973 env[1138]: time="2024-12-13T14:33:30.133958240Z" level=info msg="cleaning up dead shim" Dec 13 14:33:30.159865 env[1138]: time="2024-12-13T14:33:30.159758235Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1775 runtime=io.containerd.runc.v2\n" Dec 13 14:33:30.720627 kubelet[1393]: E1213 14:33:30.720506 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:30.838320 env[1138]: time="2024-12-13T14:33:30.838193453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:30.850850 env[1138]: time="2024-12-13T14:33:30.850729412Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:30.859367 env[1138]: 
time="2024-12-13T14:33:30.859295009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:30.863996 env[1138]: time="2024-12-13T14:33:30.863915283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:30.865602 env[1138]: time="2024-12-13T14:33:30.865505200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 14:33:30.871914 env[1138]: time="2024-12-13T14:33:30.871837260Z" level=info msg="CreateContainer within sandbox \"0993c48255d03e48384905b387b4c75e4e120175dc2d88da9353836fa10cc544\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:33:30.910595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406925395.mount: Deactivated successfully. Dec 13 14:33:30.969005 env[1138]: time="2024-12-13T14:33:30.968675247Z" level=info msg="CreateContainer within sandbox \"0993c48255d03e48384905b387b4c75e4e120175dc2d88da9353836fa10cc544\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0fc7565f64638c7293bd23ae51e1fe970d240310ac1acdcbc9dfcc846e5ad997\"" Dec 13 14:33:30.972748 env[1138]: time="2024-12-13T14:33:30.971220580Z" level=info msg="StartContainer for \"0fc7565f64638c7293bd23ae51e1fe970d240310ac1acdcbc9dfcc846e5ad997\"" Dec 13 14:33:31.019753 systemd[1]: Started cri-containerd-0fc7565f64638c7293bd23ae51e1fe970d240310ac1acdcbc9dfcc846e5ad997.scope. 
Dec 13 14:33:31.083717 env[1138]: time="2024-12-13T14:33:31.083574367Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:33:31.110441 env[1138]: time="2024-12-13T14:33:31.110392028Z" level=info msg="StartContainer for \"0fc7565f64638c7293bd23ae51e1fe970d240310ac1acdcbc9dfcc846e5ad997\" returns successfully" Dec 13 14:33:31.186800 env[1138]: time="2024-12-13T14:33:31.186710435Z" level=info msg="CreateContainer within sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\"" Dec 13 14:33:31.188225 env[1138]: time="2024-12-13T14:33:31.188169637Z" level=info msg="StartContainer for \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\"" Dec 13 14:33:31.222517 systemd[1]: Started cri-containerd-fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9.scope. 
Dec 13 14:33:31.374994 env[1138]: time="2024-12-13T14:33:31.374889745Z" level=info msg="StartContainer for \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\" returns successfully" Dec 13 14:33:31.600668 kubelet[1393]: I1213 14:33:31.599909 1393 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 14:33:31.721402 kubelet[1393]: E1213 14:33:31.721318 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:32.031583 kernel: Initializing XFRM netlink socket Dec 13 14:33:32.159990 kubelet[1393]: I1213 14:33:32.159774 1393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xqzfs" podStartSLOduration=4.945571977 podStartE2EDuration="24.159745287s" podCreationTimestamp="2024-12-13 14:33:08 +0000 UTC" firstStartedPulling="2024-12-13 14:33:11.653932378 +0000 UTC m=+5.511418360" lastFinishedPulling="2024-12-13 14:33:30.868105668 +0000 UTC m=+24.725591670" observedRunningTime="2024-12-13 14:33:32.120091303 +0000 UTC m=+25.977577255" watchObservedRunningTime="2024-12-13 14:33:32.159745287 +0000 UTC m=+26.017231239" Dec 13 14:33:32.160432 kubelet[1393]: I1213 14:33:32.160077 1393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q4ksm" podStartSLOduration=9.407295764 podStartE2EDuration="24.160070618s" podCreationTimestamp="2024-12-13 14:33:08 +0000 UTC" firstStartedPulling="2024-12-13 14:33:11.230436826 +0000 UTC m=+5.087922788" lastFinishedPulling="2024-12-13 14:33:25.98321164 +0000 UTC m=+19.840697642" observedRunningTime="2024-12-13 14:33:32.158281486 +0000 UTC m=+26.015767448" watchObservedRunningTime="2024-12-13 14:33:32.160070618 +0000 UTC m=+26.017556580" Dec 13 14:33:32.722676 kubelet[1393]: E1213 14:33:32.722549 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:33.723621 
kubelet[1393]: E1213 14:33:33.723480 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:33.886772 systemd-networkd[977]: cilium_host: Link UP Dec 13 14:33:33.887571 systemd-networkd[977]: cilium_net: Link UP Dec 13 14:33:33.893589 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:33:33.893732 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:33:33.897706 systemd-networkd[977]: cilium_net: Gained carrier Dec 13 14:33:33.898150 systemd-networkd[977]: cilium_host: Gained carrier Dec 13 14:33:34.038569 systemd-networkd[977]: cilium_host: Gained IPv6LL Dec 13 14:33:34.077149 systemd-networkd[977]: cilium_vxlan: Link UP Dec 13 14:33:34.077161 systemd-networkd[977]: cilium_vxlan: Gained carrier Dec 13 14:33:34.472282 kernel: NET: Registered PF_ALG protocol family Dec 13 14:33:34.590418 systemd-networkd[977]: cilium_net: Gained IPv6LL Dec 13 14:33:34.725189 kubelet[1393]: E1213 14:33:34.725002 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:35.569359 systemd-networkd[977]: lxc_health: Link UP Dec 13 14:33:35.573071 systemd[1]: Created slice kubepods-besteffort-podea2d3fdd_7c24_4e49_81e5_35f254825ee6.slice. 
Dec 13 14:33:35.580324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:33:35.581472 systemd-networkd[977]: lxc_health: Gained carrier Dec 13 14:33:35.683723 kubelet[1393]: I1213 14:33:35.683646 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmmm\" (UniqueName: \"kubernetes.io/projected/ea2d3fdd-7c24-4e49-81e5-35f254825ee6-kube-api-access-hcmmm\") pod \"nginx-deployment-8587fbcb89-jtkjr\" (UID: \"ea2d3fdd-7c24-4e49-81e5-35f254825ee6\") " pod="default/nginx-deployment-8587fbcb89-jtkjr" Dec 13 14:33:35.728394 kubelet[1393]: E1213 14:33:35.728325 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:35.887506 env[1138]: time="2024-12-13T14:33:35.886665062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jtkjr,Uid:ea2d3fdd-7c24-4e49-81e5-35f254825ee6,Namespace:default,Attempt:0,}" Dec 13 14:33:35.933670 systemd-networkd[977]: cilium_vxlan: Gained IPv6LL Dec 13 14:33:36.023386 systemd-networkd[977]: lxc1828eba054a5: Link UP Dec 13 14:33:36.028389 kernel: eth0: renamed from tmp49e0d Dec 13 14:33:36.035977 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1828eba054a5: link becomes ready Dec 13 14:33:36.035382 systemd-networkd[977]: lxc1828eba054a5: Gained carrier Dec 13 14:33:36.729374 kubelet[1393]: E1213 14:33:36.729318 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:37.085597 systemd-networkd[977]: lxc1828eba054a5: Gained IPv6LL Dec 13 14:33:37.533542 systemd-networkd[977]: lxc_health: Gained IPv6LL Dec 13 14:33:37.730496 kubelet[1393]: E1213 14:33:37.730340 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:38.731020 kubelet[1393]: E1213 14:33:38.730968 1393 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:39.732205 kubelet[1393]: E1213 14:33:39.732082 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:40.733234 kubelet[1393]: E1213 14:33:40.733189 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:33:40.932766 env[1138]: time="2024-12-13T14:33:40.932478026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:40.932766 env[1138]: time="2024-12-13T14:33:40.932522470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:40.933798 env[1138]: time="2024-12-13T14:33:40.932537358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:40.934302 env[1138]: time="2024-12-13T14:33:40.934179291Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49e0d707981d3f573583a2cf80916894a4c2ceb6e20701e79971c8a473c5ff88 pid=2461 runtime=io.containerd.runc.v2 Dec 13 14:33:40.956382 systemd[1]: run-containerd-runc-k8s.io-49e0d707981d3f573583a2cf80916894a4c2ceb6e20701e79971c8a473c5ff88-runc.M1p5Bw.mount: Deactivated successfully. Dec 13 14:33:40.962575 systemd[1]: Started cri-containerd-49e0d707981d3f573583a2cf80916894a4c2ceb6e20701e79971c8a473c5ff88.scope. 
Dec 13 14:33:41.005979 env[1138]: time="2024-12-13T14:33:41.005824363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jtkjr,Uid:ea2d3fdd-7c24-4e49-81e5-35f254825ee6,Namespace:default,Attempt:0,} returns sandbox id \"49e0d707981d3f573583a2cf80916894a4c2ceb6e20701e79971c8a473c5ff88\""
Dec 13 14:33:41.008731 env[1138]: time="2024-12-13T14:33:41.008691876Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:33:41.734466 kubelet[1393]: E1213 14:33:41.734364 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:42.735493 kubelet[1393]: E1213 14:33:42.735422 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:43.736154 kubelet[1393]: E1213 14:33:43.736111 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:44.736509 kubelet[1393]: E1213 14:33:44.736379 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:45.305888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92947950.mount: Deactivated successfully.
Dec 13 14:33:45.737365 kubelet[1393]: E1213 14:33:45.737174 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:46.737540 kubelet[1393]: E1213 14:33:46.737464 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:47.682189 kubelet[1393]: E1213 14:33:47.682081 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:47.919329 kubelet[1393]: E1213 14:33:47.738561 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:48.739761 kubelet[1393]: E1213 14:33:48.739699 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:49.270881 env[1138]: time="2024-12-13T14:33:49.270737661Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:49.277325 env[1138]: time="2024-12-13T14:33:49.277240054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:49.281610 env[1138]: time="2024-12-13T14:33:49.281509907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:49.286978 env[1138]: time="2024-12-13T14:33:49.286907486Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:49.289158 env[1138]: time="2024-12-13T14:33:49.289094520Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:33:49.292930 env[1138]: time="2024-12-13T14:33:49.292866509Z" level=info msg="CreateContainer within sandbox \"49e0d707981d3f573583a2cf80916894a4c2ceb6e20701e79971c8a473c5ff88\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 14:33:49.331622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722969941.mount: Deactivated successfully.
Dec 13 14:33:49.334094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2779240560.mount: Deactivated successfully.
Dec 13 14:33:49.353982 env[1138]: time="2024-12-13T14:33:49.353846274Z" level=info msg="CreateContainer within sandbox \"49e0d707981d3f573583a2cf80916894a4c2ceb6e20701e79971c8a473c5ff88\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4effb1b2b2ae8af0ddf8904481c05fa9f684527f8e1e06e4a8d72d3a3dd55414\""
Dec 13 14:33:49.355764 env[1138]: time="2024-12-13T14:33:49.355698540Z" level=info msg="StartContainer for \"4effb1b2b2ae8af0ddf8904481c05fa9f684527f8e1e06e4a8d72d3a3dd55414\""
Dec 13 14:33:49.396357 systemd[1]: Started cri-containerd-4effb1b2b2ae8af0ddf8904481c05fa9f684527f8e1e06e4a8d72d3a3dd55414.scope.
Dec 13 14:33:49.454441 env[1138]: time="2024-12-13T14:33:49.454355331Z" level=info msg="StartContainer for \"4effb1b2b2ae8af0ddf8904481c05fa9f684527f8e1e06e4a8d72d3a3dd55414\" returns successfully"
Dec 13 14:33:49.740990 kubelet[1393]: E1213 14:33:49.740900 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:50.276897 kubelet[1393]: I1213 14:33:50.276767 1393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-jtkjr" podStartSLOduration=6.994420678 podStartE2EDuration="15.276699022s" podCreationTimestamp="2024-12-13 14:33:35 +0000 UTC" firstStartedPulling="2024-12-13 14:33:41.008306242 +0000 UTC m=+34.865792204" lastFinishedPulling="2024-12-13 14:33:49.290584596 +0000 UTC m=+43.148070548" observedRunningTime="2024-12-13 14:33:50.275315447 +0000 UTC m=+44.132801529" watchObservedRunningTime="2024-12-13 14:33:50.276699022 +0000 UTC m=+44.134185025"
Dec 13 14:33:50.742139 kubelet[1393]: E1213 14:33:50.742065 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:51.743377 kubelet[1393]: E1213 14:33:51.743310 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:52.745050 kubelet[1393]: E1213 14:33:52.744969 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:53.746318 kubelet[1393]: E1213 14:33:53.746144 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:54.748400 kubelet[1393]: E1213 14:33:54.748324 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:55.750282 kubelet[1393]: E1213 14:33:55.750148 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:56.750779 kubelet[1393]: E1213 14:33:56.750687 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:57.752001 kubelet[1393]: E1213 14:33:57.751887 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:58.752636 kubelet[1393]: E1213 14:33:58.752541 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:33:59.753563 kubelet[1393]: E1213 14:33:59.753468 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:00.158353 systemd[1]: Created slice kubepods-besteffort-podccedafa3_f919_4118_8212_f9f391a62b2e.slice.
Dec 13 14:34:00.272081 kubelet[1393]: I1213 14:34:00.271980 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ccedafa3-f919-4118-8212-f9f391a62b2e-data\") pod \"nfs-server-provisioner-0\" (UID: \"ccedafa3-f919-4118-8212-f9f391a62b2e\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:34:00.272081 kubelet[1393]: I1213 14:34:00.272041 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6rbj\" (UniqueName: \"kubernetes.io/projected/ccedafa3-f919-4118-8212-f9f391a62b2e-kube-api-access-p6rbj\") pod \"nfs-server-provisioner-0\" (UID: \"ccedafa3-f919-4118-8212-f9f391a62b2e\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:34:00.466624 env[1138]: time="2024-12-13T14:34:00.465509175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ccedafa3-f919-4118-8212-f9f391a62b2e,Namespace:default,Attempt:0,}"
Dec 13 14:34:00.755452 kubelet[1393]: E1213 14:34:00.755151 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:01.757072 kubelet[1393]: E1213 14:34:01.756903 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:01.789033 systemd-networkd[977]: lxc40b852c08e44: Link UP
Dec 13 14:34:01.795376 kernel: eth0: renamed from tmpc1b28
Dec 13 14:34:01.807409 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:34:01.807604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc40b852c08e44: link becomes ready
Dec 13 14:34:01.811604 systemd-networkd[977]: lxc40b852c08e44: Gained carrier
Dec 13 14:34:02.128301 env[1138]: time="2024-12-13T14:34:02.127781644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:34:02.128301 env[1138]: time="2024-12-13T14:34:02.127869659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:34:02.128301 env[1138]: time="2024-12-13T14:34:02.127901329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:34:02.129789 env[1138]: time="2024-12-13T14:34:02.129563988Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1b281881d1ff763992fbe2cc449a4655a0580e1134adb3e5fe42d693fc4d60a pid=2582 runtime=io.containerd.runc.v2
Dec 13 14:34:02.158641 systemd[1]: run-containerd-runc-k8s.io-c1b281881d1ff763992fbe2cc449a4655a0580e1134adb3e5fe42d693fc4d60a-runc.pnDe2W.mount: Deactivated successfully.
Dec 13 14:34:02.171829 systemd[1]: Started cri-containerd-c1b281881d1ff763992fbe2cc449a4655a0580e1134adb3e5fe42d693fc4d60a.scope.
Dec 13 14:34:02.232301 env[1138]: time="2024-12-13T14:34:02.232192437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ccedafa3-f919-4118-8212-f9f391a62b2e,Namespace:default,Attempt:0,} returns sandbox id \"c1b281881d1ff763992fbe2cc449a4655a0580e1134adb3e5fe42d693fc4d60a\""
Dec 13 14:34:02.234684 env[1138]: time="2024-12-13T14:34:02.234652122Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:34:02.757455 kubelet[1393]: E1213 14:34:02.757377 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:03.455071 systemd-networkd[977]: lxc40b852c08e44: Gained IPv6LL
Dec 13 14:34:03.758180 kubelet[1393]: E1213 14:34:03.757769 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:04.758491 kubelet[1393]: E1213 14:34:04.758404 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:05.758923 kubelet[1393]: E1213 14:34:05.758854 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:06.692395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798928204.mount: Deactivated successfully.
Dec 13 14:34:06.759622 kubelet[1393]: E1213 14:34:06.759521 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:07.682107 kubelet[1393]: E1213 14:34:07.682003 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:07.760025 kubelet[1393]: E1213 14:34:07.759966 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:08.760556 kubelet[1393]: E1213 14:34:08.760392 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:09.761065 kubelet[1393]: E1213 14:34:09.760920 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:10.761692 kubelet[1393]: E1213 14:34:10.761589 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:11.103527 env[1138]: time="2024-12-13T14:34:11.102633094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:11.108356 env[1138]: time="2024-12-13T14:34:11.108243650Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:11.113238 env[1138]: time="2024-12-13T14:34:11.113155687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:11.117895 env[1138]: time="2024-12-13T14:34:11.117755741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:11.120781 env[1138]: time="2024-12-13T14:34:11.120666281Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 14:34:11.132343 env[1138]: time="2024-12-13T14:34:11.132159723Z" level=info msg="CreateContainer within sandbox \"c1b281881d1ff763992fbe2cc449a4655a0580e1134adb3e5fe42d693fc4d60a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:34:11.185028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509175943.mount: Deactivated successfully.
Dec 13 14:34:11.203385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138334909.mount: Deactivated successfully.
Dec 13 14:34:11.208623 env[1138]: time="2024-12-13T14:34:11.208407851Z" level=info msg="CreateContainer within sandbox \"c1b281881d1ff763992fbe2cc449a4655a0580e1134adb3e5fe42d693fc4d60a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ff900d8f3b6872b63eed180971622f8cd807d78efaa3b4220da9de54adb49ed4\""
Dec 13 14:34:11.210100 env[1138]: time="2024-12-13T14:34:11.210023566Z" level=info msg="StartContainer for \"ff900d8f3b6872b63eed180971622f8cd807d78efaa3b4220da9de54adb49ed4\""
Dec 13 14:34:11.247354 systemd[1]: Started cri-containerd-ff900d8f3b6872b63eed180971622f8cd807d78efaa3b4220da9de54adb49ed4.scope.
Dec 13 14:34:11.302334 env[1138]: time="2024-12-13T14:34:11.302230447Z" level=info msg="StartContainer for \"ff900d8f3b6872b63eed180971622f8cd807d78efaa3b4220da9de54adb49ed4\" returns successfully"
Dec 13 14:34:11.762517 kubelet[1393]: E1213 14:34:11.762400 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:12.763131 kubelet[1393]: E1213 14:34:12.763028 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:13.764184 kubelet[1393]: E1213 14:34:13.764111 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:14.765677 kubelet[1393]: E1213 14:34:14.765606 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:15.767295 kubelet[1393]: E1213 14:34:15.767189 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:16.768451 kubelet[1393]: E1213 14:34:16.768340 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:17.769451 kubelet[1393]: E1213 14:34:17.769383 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:18.771017 kubelet[1393]: E1213 14:34:18.770926 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:19.772599 kubelet[1393]: E1213 14:34:19.772528 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:20.773958 kubelet[1393]: E1213 14:34:20.773808 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:21.707456 kubelet[1393]: I1213 14:34:21.707243 1393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=12.813018359 podStartE2EDuration="21.707208041s" podCreationTimestamp="2024-12-13 14:34:00 +0000 UTC" firstStartedPulling="2024-12-13 14:34:02.233741623 +0000 UTC m=+56.091227575" lastFinishedPulling="2024-12-13 14:34:11.127931255 +0000 UTC m=+64.985417257" observedRunningTime="2024-12-13 14:34:11.37187546 +0000 UTC m=+65.229361463" watchObservedRunningTime="2024-12-13 14:34:21.707208041 +0000 UTC m=+75.564694033"
Dec 13 14:34:21.722927 systemd[1]: Created slice kubepods-besteffort-pod338651e6_86a3_4941_a607_555b0a7199da.slice.
Dec 13 14:34:21.755976 kubelet[1393]: I1213 14:34:21.755841 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kp7j\" (UniqueName: \"kubernetes.io/projected/338651e6-86a3-4941-a607-555b0a7199da-kube-api-access-6kp7j\") pod \"test-pod-1\" (UID: \"338651e6-86a3-4941-a607-555b0a7199da\") " pod="default/test-pod-1"
Dec 13 14:34:21.756459 kubelet[1393]: I1213 14:34:21.756058 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-26f6162d-01dc-407e-b937-02f823dd9641\" (UniqueName: \"kubernetes.io/nfs/338651e6-86a3-4941-a607-555b0a7199da-pvc-26f6162d-01dc-407e-b937-02f823dd9641\") pod \"test-pod-1\" (UID: \"338651e6-86a3-4941-a607-555b0a7199da\") " pod="default/test-pod-1"
Dec 13 14:34:21.775032 kubelet[1393]: E1213 14:34:21.774955 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:22.158350 kernel: FS-Cache: Loaded
Dec 13 14:34:22.220004 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:34:22.220180 kernel: RPC: Registered udp transport module.
Dec 13 14:34:22.220314 kernel: RPC: Registered tcp transport module.
Dec 13 14:34:22.220738 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:34:22.305557 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:34:22.613830 kernel: NFS: Registering the id_resolver key type
Dec 13 14:34:22.614087 kernel: Key type id_resolver registered
Dec 13 14:34:22.616335 kernel: Key type id_legacy registered
Dec 13 14:34:22.744756 nfsidmap[2711]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Dec 13 14:34:22.756918 nfsidmap[2712]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Dec 13 14:34:22.778504 kubelet[1393]: E1213 14:34:22.778408 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:22.932817 env[1138]: time="2024-12-13T14:34:22.931861127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:338651e6-86a3-4941-a607-555b0a7199da,Namespace:default,Attempt:0,}"
Dec 13 14:34:23.055829 systemd-networkd[977]: lxc4f092ba4023c: Link UP
Dec 13 14:34:23.062345 kernel: eth0: renamed from tmp9cdda
Dec 13 14:34:23.080804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:34:23.081022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4f092ba4023c: link becomes ready
Dec 13 14:34:23.081506 systemd-networkd[977]: lxc4f092ba4023c: Gained carrier
Dec 13 14:34:23.388203 env[1138]: time="2024-12-13T14:34:23.387527498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:34:23.388542 env[1138]: time="2024-12-13T14:34:23.388470324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:34:23.388542 env[1138]: time="2024-12-13T14:34:23.388494760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:34:23.389210 env[1138]: time="2024-12-13T14:34:23.388980129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdda6b040fd64a8458f1df95c170efd0872032a3ce1c0e8b18f0377dcb09cc3 pid=2736 runtime=io.containerd.runc.v2
Dec 13 14:34:23.429878 systemd[1]: Started cri-containerd-9cdda6b040fd64a8458f1df95c170efd0872032a3ce1c0e8b18f0377dcb09cc3.scope.
Dec 13 14:34:23.492203 env[1138]: time="2024-12-13T14:34:23.492121651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:338651e6-86a3-4941-a607-555b0a7199da,Namespace:default,Attempt:0,} returns sandbox id \"9cdda6b040fd64a8458f1df95c170efd0872032a3ce1c0e8b18f0377dcb09cc3\""
Dec 13 14:34:23.494761 env[1138]: time="2024-12-13T14:34:23.494730788Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:34:23.779160 kubelet[1393]: E1213 14:34:23.779045 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:24.746298 env[1138]: time="2024-12-13T14:34:24.746116721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:24.753570 env[1138]: time="2024-12-13T14:34:24.753457093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:24.760665 env[1138]: time="2024-12-13T14:34:24.760537798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:24.765930 env[1138]: time="2024-12-13T14:34:24.765863986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:24.768083 env[1138]: time="2024-12-13T14:34:24.768018222Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:34:24.777441 env[1138]: time="2024-12-13T14:34:24.777295452Z" level=info msg="CreateContainer within sandbox \"9cdda6b040fd64a8458f1df95c170efd0872032a3ce1c0e8b18f0377dcb09cc3\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:34:24.779571 kubelet[1393]: E1213 14:34:24.779465 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:24.817699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442076278.mount: Deactivated successfully.
Dec 13 14:34:24.840215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387772694.mount: Deactivated successfully.
Dec 13 14:34:24.854046 env[1138]: time="2024-12-13T14:34:24.853925362Z" level=info msg="CreateContainer within sandbox \"9cdda6b040fd64a8458f1df95c170efd0872032a3ce1c0e8b18f0377dcb09cc3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"41629e94fb3125863cc93ae91ce8c10566f1dddfea35a0d4e230c365a668a982\""
Dec 13 14:34:24.856234 env[1138]: time="2024-12-13T14:34:24.856147885Z" level=info msg="StartContainer for \"41629e94fb3125863cc93ae91ce8c10566f1dddfea35a0d4e230c365a668a982\""
Dec 13 14:34:24.891086 systemd[1]: Started cri-containerd-41629e94fb3125863cc93ae91ce8c10566f1dddfea35a0d4e230c365a668a982.scope.
Dec 13 14:34:24.933340 env[1138]: time="2024-12-13T14:34:24.933235622Z" level=info msg="StartContainer for \"41629e94fb3125863cc93ae91ce8c10566f1dddfea35a0d4e230c365a668a982\" returns successfully"
Dec 13 14:34:24.957668 systemd-networkd[977]: lxc4f092ba4023c: Gained IPv6LL
Dec 13 14:34:25.428865 kubelet[1393]: I1213 14:34:25.428793 1393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.151260429 podStartE2EDuration="23.428770083s" podCreationTimestamp="2024-12-13 14:34:02 +0000 UTC" firstStartedPulling="2024-12-13 14:34:23.493801307 +0000 UTC m=+77.351287269" lastFinishedPulling="2024-12-13 14:34:24.771310911 +0000 UTC m=+78.628796923" observedRunningTime="2024-12-13 14:34:25.428727162 +0000 UTC m=+79.286213165" watchObservedRunningTime="2024-12-13 14:34:25.428770083 +0000 UTC m=+79.286256045"
Dec 13 14:34:25.781616 kubelet[1393]: E1213 14:34:25.781397 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:26.781740 kubelet[1393]: E1213 14:34:26.781666 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:27.682599 kubelet[1393]: E1213 14:34:27.682504 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:27.783130 kubelet[1393]: E1213 14:34:27.783026 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:28.783936 kubelet[1393]: E1213 14:34:28.783810 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:29.785922 kubelet[1393]: E1213 14:34:29.785793 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:30.412832 systemd[1]: run-containerd-runc-k8s.io-fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9-runc.Ak2jcf.mount: Deactivated successfully.
Dec 13 14:34:30.463443 env[1138]: time="2024-12-13T14:34:30.463286079Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:34:30.476347 env[1138]: time="2024-12-13T14:34:30.476230915Z" level=info msg="StopContainer for \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\" with timeout 2 (s)"
Dec 13 14:34:30.477556 env[1138]: time="2024-12-13T14:34:30.477472049Z" level=info msg="Stop container \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\" with signal terminated"
Dec 13 14:34:30.489219 systemd-networkd[977]: lxc_health: Link DOWN
Dec 13 14:34:30.489233 systemd-networkd[977]: lxc_health: Lost carrier
Dec 13 14:34:30.542033 systemd[1]: cri-containerd-fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9.scope: Deactivated successfully.
Dec 13 14:34:30.542572 systemd[1]: cri-containerd-fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9.scope: Consumed 9.659s CPU time.
Dec 13 14:34:30.585800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9-rootfs.mount: Deactivated successfully.
Dec 13 14:34:30.601715 env[1138]: time="2024-12-13T14:34:30.601616841Z" level=info msg="shim disconnected" id=fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9
Dec 13 14:34:30.601931 env[1138]: time="2024-12-13T14:34:30.601726908Z" level=warning msg="cleaning up after shim disconnected" id=fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9 namespace=k8s.io
Dec 13 14:34:30.601931 env[1138]: time="2024-12-13T14:34:30.601755872Z" level=info msg="cleaning up dead shim"
Dec 13 14:34:30.613226 env[1138]: time="2024-12-13T14:34:30.613169948Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2867 runtime=io.containerd.runc.v2\n"
Dec 13 14:34:30.617632 env[1138]: time="2024-12-13T14:34:30.617562268Z" level=info msg="StopContainer for \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\" returns successfully"
Dec 13 14:34:30.618863 env[1138]: time="2024-12-13T14:34:30.618834342Z" level=info msg="StopPodSandbox for \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\""
Dec 13 14:34:30.619082 env[1138]: time="2024-12-13T14:34:30.619057761Z" level=info msg="Container to stop \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:34:30.619166 env[1138]: time="2024-12-13T14:34:30.619146406Z" level=info msg="Container to stop \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:34:30.619240 env[1138]: time="2024-12-13T14:34:30.619221627Z" level=info msg="Container to stop \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:34:30.619341 env[1138]: time="2024-12-13T14:34:30.619320893Z" level=info msg="Container to stop \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:34:30.619422 env[1138]: time="2024-12-13T14:34:30.619403468Z" level=info msg="Container to stop \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:34:30.621600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133-shm.mount: Deactivated successfully.
Dec 13 14:34:30.630034 systemd[1]: cri-containerd-4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133.scope: Deactivated successfully.
Dec 13 14:34:30.658034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133-rootfs.mount: Deactivated successfully.
Dec 13 14:34:30.665998 env[1138]: time="2024-12-13T14:34:30.665906502Z" level=info msg="shim disconnected" id=4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133
Dec 13 14:34:30.667015 env[1138]: time="2024-12-13T14:34:30.666990693Z" level=warning msg="cleaning up after shim disconnected" id=4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133 namespace=k8s.io
Dec 13 14:34:30.667134 env[1138]: time="2024-12-13T14:34:30.667115506Z" level=info msg="cleaning up dead shim"
Dec 13 14:34:30.679330 env[1138]: time="2024-12-13T14:34:30.677175516Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2901 runtime=io.containerd.runc.v2\n"
Dec 13 14:34:30.679330 env[1138]: time="2024-12-13T14:34:30.678314159Z" level=info msg="TearDown network for sandbox \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" successfully"
Dec 13 14:34:30.679330 env[1138]: time="2024-12-13T14:34:30.678345698Z" level=info msg="StopPodSandbox for \"4e775b4add3f7b60daf4106c7a5221678edbf995f5889e2d23102f1737a66133\" returns successfully"
Dec 13 14:34:30.787006 kubelet[1393]: E1213 14:34:30.786840 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:30.824722 kubelet[1393]: I1213 14:34:30.824516 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-net\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") "
Dec 13 14:34:30.824722 kubelet[1393]: I1213 14:34:30.824638 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-hubble-tls\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") "
Dec 13 14:34:30.825097 kubelet[1393]: I1213 14:34:30.824816 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-cgroup\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") "
Dec 13 14:34:30.825097 kubelet[1393]: I1213 14:34:30.824878 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-kernel\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") "
Dec 13 14:34:30.825097 kubelet[1393]: I1213 14:34:30.824932 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dbsn\" (UniqueName: \"kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-kube-api-access-7dbsn\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" 
(UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825097 kubelet[1393]: I1213 14:34:30.824973 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-xtables-lock\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825097 kubelet[1393]: I1213 14:34:30.825011 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-etc-cni-netd\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825097 kubelet[1393]: I1213 14:34:30.825054 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-bpf-maps\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825593 kubelet[1393]: I1213 14:34:30.825107 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70f97af7-716e-4242-ad8d-e0906d610939-clustermesh-secrets\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825593 kubelet[1393]: I1213 14:34:30.825149 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-run\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825593 kubelet[1393]: I1213 14:34:30.825186 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-hostproc\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825593 kubelet[1393]: I1213 14:34:30.825234 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70f97af7-716e-4242-ad8d-e0906d610939-cilium-config-path\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825593 kubelet[1393]: I1213 14:34:30.825326 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-lib-modules\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.825593 kubelet[1393]: I1213 14:34:30.825369 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cni-path\") pod \"70f97af7-716e-4242-ad8d-e0906d610939\" (UID: \"70f97af7-716e-4242-ad8d-e0906d610939\") " Dec 13 14:34:30.826121 kubelet[1393]: I1213 14:34:30.825494 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cni-path" (OuterVolumeSpecName: "cni-path") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.827716 kubelet[1393]: I1213 14:34:30.827621 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.828093 kubelet[1393]: I1213 14:34:30.828045 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.828374 kubelet[1393]: I1213 14:34:30.828330 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.830326 kubelet[1393]: I1213 14:34:30.828624 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.830326 kubelet[1393]: I1213 14:34:30.829674 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.830569 kubelet[1393]: I1213 14:34:30.829716 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-hostproc" (OuterVolumeSpecName: "hostproc") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.830569 kubelet[1393]: I1213 14:34:30.829781 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.830569 kubelet[1393]: I1213 14:34:30.829900 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.830569 kubelet[1393]: I1213 14:34:30.830433 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:30.840918 kubelet[1393]: I1213 14:34:30.840852 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f97af7-716e-4242-ad8d-e0906d610939-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:34:30.844084 kubelet[1393]: I1213 14:34:30.844028 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-kube-api-access-7dbsn" (OuterVolumeSpecName: "kube-api-access-7dbsn") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "kube-api-access-7dbsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:34:30.844512 kubelet[1393]: I1213 14:34:30.844407 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:34:30.847613 kubelet[1393]: I1213 14:34:30.847535 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f97af7-716e-4242-ad8d-e0906d610939-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70f97af7-716e-4242-ad8d-e0906d610939" (UID: "70f97af7-716e-4242-ad8d-e0906d610939"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:34:30.926009 kubelet[1393]: I1213 14:34:30.925832 1393 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-net\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926009 kubelet[1393]: I1213 14:34:30.925914 1393 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-hubble-tls\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926009 kubelet[1393]: I1213 14:34:30.925939 1393 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-cgroup\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926009 kubelet[1393]: I1213 14:34:30.925967 1393 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-host-proc-sys-kernel\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926009 kubelet[1393]: I1213 14:34:30.925996 1393 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7dbsn\" (UniqueName: \"kubernetes.io/projected/70f97af7-716e-4242-ad8d-e0906d610939-kube-api-access-7dbsn\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926009 kubelet[1393]: I1213 14:34:30.926022 1393 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-xtables-lock\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926717 kubelet[1393]: I1213 14:34:30.926051 1393 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-etc-cni-netd\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 
14:34:30.926717 kubelet[1393]: I1213 14:34:30.926075 1393 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-bpf-maps\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926717 kubelet[1393]: I1213 14:34:30.926097 1393 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70f97af7-716e-4242-ad8d-e0906d610939-clustermesh-secrets\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926717 kubelet[1393]: I1213 14:34:30.926122 1393 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cilium-run\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926717 kubelet[1393]: I1213 14:34:30.926143 1393 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-hostproc\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926717 kubelet[1393]: I1213 14:34:30.926164 1393 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70f97af7-716e-4242-ad8d-e0906d610939-cilium-config-path\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926717 kubelet[1393]: I1213 14:34:30.926185 1393 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-lib-modules\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:30.926717 kubelet[1393]: I1213 14:34:30.926206 1393 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70f97af7-716e-4242-ad8d-e0906d610939-cni-path\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:31.408557 systemd[1]: 
var-lib-kubelet-pods-70f97af7\x2d716e\x2d4242\x2dad8d\x2de0906d610939-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:34:31.408812 systemd[1]: var-lib-kubelet-pods-70f97af7\x2d716e\x2d4242\x2dad8d\x2de0906d610939-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7dbsn.mount: Deactivated successfully. Dec 13 14:34:31.408975 systemd[1]: var-lib-kubelet-pods-70f97af7\x2d716e\x2d4242\x2dad8d\x2de0906d610939-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:34:31.420519 kubelet[1393]: I1213 14:34:31.420479 1393 scope.go:117] "RemoveContainer" containerID="fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9" Dec 13 14:34:31.423859 env[1138]: time="2024-12-13T14:34:31.423157576Z" level=info msg="RemoveContainer for \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\"" Dec 13 14:34:31.429301 systemd[1]: Removed slice kubepods-burstable-pod70f97af7_716e_4242_ad8d_e0906d610939.slice. Dec 13 14:34:31.429644 systemd[1]: kubepods-burstable-pod70f97af7_716e_4242_ad8d_e0906d610939.slice: Consumed 9.793s CPU time. 
Dec 13 14:34:31.566758 env[1138]: time="2024-12-13T14:34:31.566658121Z" level=info msg="RemoveContainer for \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\" returns successfully" Dec 13 14:34:31.567628 kubelet[1393]: I1213 14:34:31.567228 1393 scope.go:117] "RemoveContainer" containerID="7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f" Dec 13 14:34:31.570769 env[1138]: time="2024-12-13T14:34:31.570684135Z" level=info msg="RemoveContainer for \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\"" Dec 13 14:34:31.631020 env[1138]: time="2024-12-13T14:34:31.630935822Z" level=info msg="RemoveContainer for \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\" returns successfully" Dec 13 14:34:31.631907 kubelet[1393]: I1213 14:34:31.631866 1393 scope.go:117] "RemoveContainer" containerID="660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90" Dec 13 14:34:31.635175 env[1138]: time="2024-12-13T14:34:31.634809250Z" level=info msg="RemoveContainer for \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\"" Dec 13 14:34:31.639158 env[1138]: time="2024-12-13T14:34:31.639115027Z" level=info msg="RemoveContainer for \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\" returns successfully" Dec 13 14:34:31.639697 kubelet[1393]: I1213 14:34:31.639649 1393 scope.go:117] "RemoveContainer" containerID="4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e" Dec 13 14:34:31.643431 env[1138]: time="2024-12-13T14:34:31.643380380Z" level=info msg="RemoveContainer for \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\"" Dec 13 14:34:31.647244 env[1138]: time="2024-12-13T14:34:31.647170301Z" level=info msg="RemoveContainer for \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\" returns successfully" Dec 13 14:34:31.647929 kubelet[1393]: I1213 14:34:31.647898 1393 scope.go:117] "RemoveContainer" 
containerID="024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e" Dec 13 14:34:31.650161 env[1138]: time="2024-12-13T14:34:31.650116532Z" level=info msg="RemoveContainer for \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\"" Dec 13 14:34:31.654001 env[1138]: time="2024-12-13T14:34:31.653959091Z" level=info msg="RemoveContainer for \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\" returns successfully" Dec 13 14:34:31.654306 kubelet[1393]: I1213 14:34:31.654222 1393 scope.go:117] "RemoveContainer" containerID="fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9" Dec 13 14:34:31.654685 env[1138]: time="2024-12-13T14:34:31.654561480Z" level=error msg="ContainerStatus for \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\": not found" Dec 13 14:34:31.655298 kubelet[1393]: E1213 14:34:31.655213 1393 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\": not found" containerID="fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9" Dec 13 14:34:31.655650 kubelet[1393]: I1213 14:34:31.655486 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9"} err="failed to get container status \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb59aa7c256f528ea730aba4e068a4721a0a92c77d6f273926cb96eef0fb83a9\": not found" Dec 13 14:34:31.655831 kubelet[1393]: I1213 14:34:31.655800 1393 scope.go:117] "RemoveContainer" 
containerID="7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f" Dec 13 14:34:31.656575 env[1138]: time="2024-12-13T14:34:31.656428970Z" level=error msg="ContainerStatus for \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\": not found" Dec 13 14:34:31.657023 kubelet[1393]: E1213 14:34:31.656980 1393 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\": not found" containerID="7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f" Dec 13 14:34:31.657242 kubelet[1393]: I1213 14:34:31.657194 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f"} err="failed to get container status \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ec6cbbe8eebe5c2227a76e549de668665a4b7bb81fdc42afc602366d61a4a2f\": not found" Dec 13 14:34:31.657477 kubelet[1393]: I1213 14:34:31.657447 1393 scope.go:117] "RemoveContainer" containerID="660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90" Dec 13 14:34:31.658223 env[1138]: time="2024-12-13T14:34:31.657951081Z" level=error msg="ContainerStatus for \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\": not found" Dec 13 14:34:31.658473 kubelet[1393]: E1213 14:34:31.658429 1393 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\": not found" containerID="660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90" Dec 13 14:34:31.658579 kubelet[1393]: I1213 14:34:31.658489 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90"} err="failed to get container status \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\": rpc error: code = NotFound desc = an error occurred when try to find container \"660c2c8b5089ab70927249bca0d6225f709c47e6986c349a3580789570e7ca90\": not found" Dec 13 14:34:31.658579 kubelet[1393]: I1213 14:34:31.658525 1393 scope.go:117] "RemoveContainer" containerID="4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e" Dec 13 14:34:31.658826 env[1138]: time="2024-12-13T14:34:31.658762993Z" level=error msg="ContainerStatus for \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\": not found" Dec 13 14:34:31.658977 kubelet[1393]: E1213 14:34:31.658927 1393 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\": not found" containerID="4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e" Dec 13 14:34:31.659074 kubelet[1393]: I1213 14:34:31.658974 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e"} err="failed to get container status \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4c01a9643e3c01c1566133ffc4a17226750c4f6a90ceaaf1819ccf718beaf47e\": not found" Dec 13 14:34:31.659074 kubelet[1393]: I1213 14:34:31.658995 1393 scope.go:117] "RemoveContainer" containerID="024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e" Dec 13 14:34:31.659332 env[1138]: time="2024-12-13T14:34:31.659209299Z" level=error msg="ContainerStatus for \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\": not found" Dec 13 14:34:31.659549 kubelet[1393]: E1213 14:34:31.659414 1393 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\": not found" containerID="024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e" Dec 13 14:34:31.659549 kubelet[1393]: I1213 14:34:31.659436 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e"} err="failed to get container status \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\": rpc error: code = NotFound desc = an error occurred when try to find container \"024c6bd607f69499795abd8d77f7bb9f887400a7fe1497c2172a56b40e72551e\": not found" Dec 13 14:34:31.788774 kubelet[1393]: E1213 14:34:31.788589 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:31.926322 kubelet[1393]: I1213 14:34:31.926116 1393 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f97af7-716e-4242-ad8d-e0906d610939" path="/var/lib/kubelet/pods/70f97af7-716e-4242-ad8d-e0906d610939/volumes" Dec 13 14:34:32.790224 kubelet[1393]: E1213 14:34:32.790024 1393 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:32.906492 kubelet[1393]: E1213 14:34:32.906383 1393 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:34:33.791356 kubelet[1393]: E1213 14:34:33.791285 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:34.793386 kubelet[1393]: E1213 14:34:34.793262 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:35.545647 kubelet[1393]: E1213 14:34:35.544862 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70f97af7-716e-4242-ad8d-e0906d610939" containerName="mount-cgroup" Dec 13 14:34:35.545647 kubelet[1393]: E1213 14:34:35.544912 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70f97af7-716e-4242-ad8d-e0906d610939" containerName="cilium-agent" Dec 13 14:34:35.545647 kubelet[1393]: E1213 14:34:35.544930 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70f97af7-716e-4242-ad8d-e0906d610939" containerName="apply-sysctl-overwrites" Dec 13 14:34:35.545647 kubelet[1393]: E1213 14:34:35.544945 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70f97af7-716e-4242-ad8d-e0906d610939" containerName="mount-bpf-fs" Dec 13 14:34:35.545647 kubelet[1393]: E1213 14:34:35.544960 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70f97af7-716e-4242-ad8d-e0906d610939" containerName="clean-cilium-state" Dec 13 14:34:35.545647 kubelet[1393]: I1213 14:34:35.545003 1393 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f97af7-716e-4242-ad8d-e0906d610939" containerName="cilium-agent" Dec 13 14:34:35.556653 systemd[1]: Created slice 
kubepods-besteffort-poda88f4333_e531_4177_9957_80f8dd318fd6.slice. Dec 13 14:34:35.593139 systemd[1]: Created slice kubepods-burstable-pod10b8393e_49dc_4b3e_b814_d27c6c45447c.slice. Dec 13 14:34:35.660562 kubelet[1393]: I1213 14:34:35.660481 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dbw\" (UniqueName: \"kubernetes.io/projected/a88f4333-e531-4177-9957-80f8dd318fd6-kube-api-access-d2dbw\") pod \"cilium-operator-5d85765b45-pdv8j\" (UID: \"a88f4333-e531-4177-9957-80f8dd318fd6\") " pod="kube-system/cilium-operator-5d85765b45-pdv8j" Dec 13 14:34:35.660562 kubelet[1393]: I1213 14:34:35.660555 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a88f4333-e531-4177-9957-80f8dd318fd6-cilium-config-path\") pod \"cilium-operator-5d85765b45-pdv8j\" (UID: \"a88f4333-e531-4177-9957-80f8dd318fd6\") " pod="kube-system/cilium-operator-5d85765b45-pdv8j" Dec 13 14:34:35.762155 kubelet[1393]: I1213 14:34:35.761943 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-hostproc\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.762155 kubelet[1393]: I1213 14:34:35.762043 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-etc-cni-netd\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.762155 kubelet[1393]: I1213 14:34:35.762148 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-config-path\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.762716 kubelet[1393]: I1213 14:34:35.762285 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-hubble-tls\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.762716 kubelet[1393]: I1213 14:34:35.762340 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv87d\" (UniqueName: \"kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-kube-api-access-zv87d\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.762716 kubelet[1393]: I1213 14:34:35.762400 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-kernel\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.763119 kubelet[1393]: I1213 14:34:35.762445 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-xtables-lock\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.763237 kubelet[1393]: I1213 14:34:35.763142 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-cgroup\") pod \"cilium-k5nn9\" (UID: 
\"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.763237 kubelet[1393]: I1213 14:34:35.763190 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-run\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.763237 kubelet[1393]: I1213 14:34:35.763235 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-clustermesh-secrets\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.765168 kubelet[1393]: I1213 14:34:35.763341 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-ipsec-secrets\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.765168 kubelet[1393]: I1213 14:34:35.763420 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cni-path\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.765168 kubelet[1393]: I1213 14:34:35.763462 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-lib-modules\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.765168 kubelet[1393]: I1213 14:34:35.763504 
1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-net\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.765168 kubelet[1393]: I1213 14:34:35.763583 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-bpf-maps\") pod \"cilium-k5nn9\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " pod="kube-system/cilium-k5nn9" Dec 13 14:34:35.797588 kubelet[1393]: E1213 14:34:35.794811 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:35.863120 env[1138]: time="2024-12-13T14:34:35.862970956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pdv8j,Uid:a88f4333-e531-4177-9957-80f8dd318fd6,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:35.895501 env[1138]: time="2024-12-13T14:34:35.895378547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:35.895501 env[1138]: time="2024-12-13T14:34:35.895431526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:35.895501 env[1138]: time="2024-12-13T14:34:35.895446624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:35.895911 env[1138]: time="2024-12-13T14:34:35.895593099Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c59c9a4f2475fe122e70c343508d3cd4bdaffc932b02c095de7acd2b43b601c pid=2933 runtime=io.containerd.runc.v2 Dec 13 14:34:35.960392 systemd[1]: Started cri-containerd-8c59c9a4f2475fe122e70c343508d3cd4bdaffc932b02c095de7acd2b43b601c.scope. Dec 13 14:34:36.006696 env[1138]: time="2024-12-13T14:34:36.006645623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pdv8j,Uid:a88f4333-e531-4177-9957-80f8dd318fd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c59c9a4f2475fe122e70c343508d3cd4bdaffc932b02c095de7acd2b43b601c\"" Dec 13 14:34:36.008649 env[1138]: time="2024-12-13T14:34:36.008538951Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:34:36.203982 env[1138]: time="2024-12-13T14:34:36.203904839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5nn9,Uid:10b8393e-49dc-4b3e-b814-d27c6c45447c,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:36.502500 env[1138]: time="2024-12-13T14:34:36.502155577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:36.502500 env[1138]: time="2024-12-13T14:34:36.502281352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:36.502855 env[1138]: time="2024-12-13T14:34:36.502334182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:36.503530 env[1138]: time="2024-12-13T14:34:36.503426899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2 pid=2975 runtime=io.containerd.runc.v2 Dec 13 14:34:36.533514 systemd[1]: Started cri-containerd-173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2.scope. Dec 13 14:34:36.579694 env[1138]: time="2024-12-13T14:34:36.579642043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5nn9,Uid:10b8393e-49dc-4b3e-b814-d27c6c45447c,Namespace:kube-system,Attempt:0,} returns sandbox id \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\"" Dec 13 14:34:36.594422 env[1138]: time="2024-12-13T14:34:36.594197860Z" level=info msg="CreateContainer within sandbox \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:34:36.618540 env[1138]: time="2024-12-13T14:34:36.618412747Z" level=info msg="CreateContainer within sandbox \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\"" Dec 13 14:34:36.619484 env[1138]: time="2024-12-13T14:34:36.619404014Z" level=info msg="StartContainer for \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\"" Dec 13 14:34:36.650871 systemd[1]: Started cri-containerd-9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314.scope. Dec 13 14:34:36.666705 systemd[1]: cri-containerd-9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314.scope: Deactivated successfully. 
Dec 13 14:34:36.721673 env[1138]: time="2024-12-13T14:34:36.721579887Z" level=info msg="shim disconnected" id=9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314 Dec 13 14:34:36.722225 env[1138]: time="2024-12-13T14:34:36.722165906Z" level=warning msg="cleaning up after shim disconnected" id=9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314 namespace=k8s.io Dec 13 14:34:36.722498 env[1138]: time="2024-12-13T14:34:36.722461109Z" level=info msg="cleaning up dead shim" Dec 13 14:34:36.738399 env[1138]: time="2024-12-13T14:34:36.738328934Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3033 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:34:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:34:36.739365 env[1138]: time="2024-12-13T14:34:36.739134023Z" level=error msg="copy shim log" error="read /proc/self/fd/72: file already closed" Dec 13 14:34:36.739897 env[1138]: time="2024-12-13T14:34:36.739769875Z" level=error msg="Failed to pipe stderr of container \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\"" error="reading from a closed fifo" Dec 13 14:34:36.740433 env[1138]: time="2024-12-13T14:34:36.740357125Z" level=error msg="Failed to pipe stdout of container \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\"" error="reading from a closed fifo" Dec 13 14:34:36.744020 env[1138]: time="2024-12-13T14:34:36.743881250Z" level=error msg="StartContainer for \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:34:36.744634 kubelet[1393]: E1213 14:34:36.744388 1393 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314" Dec 13 14:34:36.749511 kubelet[1393]: E1213 14:34:36.749454 1393 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 14:34:36.749511 kubelet[1393]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:34:36.749511 kubelet[1393]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:34:36.749511 kubelet[1393]: rm /hostbin/cilium-mount Dec 13 14:34:36.749861 kubelet[1393]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv87d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-k5nn9_kube-system(10b8393e-49dc-4b3e-b814-d27c6c45447c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:34:36.749861 kubelet[1393]: > logger="UnhandledError" Dec 13 14:34:36.751461 kubelet[1393]: E1213 14:34:36.751378 1393 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k5nn9" podUID="10b8393e-49dc-4b3e-b814-d27c6c45447c" Dec 13 14:34:36.796231 kubelet[1393]: E1213 14:34:36.796015 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:37.454435 env[1138]: time="2024-12-13T14:34:37.454350683Z" level=info msg="CreateContainer within sandbox \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 14:34:37.786194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838085280.mount: Deactivated successfully. Dec 13 14:34:37.797204 kubelet[1393]: E1213 14:34:37.797157 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:37.808889 env[1138]: time="2024-12-13T14:34:37.808771384Z" level=info msg="CreateContainer within sandbox \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\"" Dec 13 14:34:37.811128 env[1138]: time="2024-12-13T14:34:37.811063258Z" level=info msg="StartContainer for \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\"" Dec 13 14:34:37.866866 systemd[1]: run-containerd-runc-k8s.io-2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1-runc.bSIErn.mount: Deactivated successfully. Dec 13 14:34:37.871477 systemd[1]: Started cri-containerd-2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1.scope. 
Dec 13 14:34:37.894589 systemd[1]: cri-containerd-2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1.scope: Deactivated successfully. Dec 13 14:34:37.905549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1-rootfs.mount: Deactivated successfully. Dec 13 14:34:37.917298 kubelet[1393]: E1213 14:34:37.917224 1393 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:34:38.079556 env[1138]: time="2024-12-13T14:34:38.079220375Z" level=info msg="shim disconnected" id=2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1 Dec 13 14:34:38.079556 env[1138]: time="2024-12-13T14:34:38.079375364Z" level=warning msg="cleaning up after shim disconnected" id=2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1 namespace=k8s.io Dec 13 14:34:38.079556 env[1138]: time="2024-12-13T14:34:38.079407314Z" level=info msg="cleaning up dead shim" Dec 13 14:34:38.100714 env[1138]: time="2024-12-13T14:34:38.100583119Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3070 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:34:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:34:38.101949 env[1138]: time="2024-12-13T14:34:38.101804988Z" level=error msg="copy shim log" error="read /proc/self/fd/72: file already closed" Dec 13 14:34:38.104469 env[1138]: time="2024-12-13T14:34:38.102359949Z" level=error msg="Failed to pipe stderr of container \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\"" error="reading from a closed fifo" Dec 13 
14:34:38.106497 env[1138]: time="2024-12-13T14:34:38.106345628Z" level=error msg="Failed to pipe stdout of container \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\"" error="reading from a closed fifo" Dec 13 14:34:38.205755 env[1138]: time="2024-12-13T14:34:38.205590575Z" level=error msg="StartContainer for \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:34:38.206410 kubelet[1393]: E1213 14:34:38.206289 1393 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1" Dec 13 14:34:38.206728 kubelet[1393]: E1213 14:34:38.206635 1393 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 14:34:38.206728 kubelet[1393]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:34:38.206728 kubelet[1393]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:34:38.206728 kubelet[1393]: rm /hostbin/cilium-mount Dec 13 14:34:38.206728 kubelet[1393]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv87d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-k5nn9_kube-system(10b8393e-49dc-4b3e-b814-d27c6c45447c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:34:38.206728 kubelet[1393]: > logger="UnhandledError" Dec 13 14:34:38.208991 kubelet[1393]: E1213 14:34:38.208887 1393 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k5nn9" podUID="10b8393e-49dc-4b3e-b814-d27c6c45447c" Dec 13 14:34:38.476329 kubelet[1393]: I1213 14:34:38.476040 1393 scope.go:117] "RemoveContainer" containerID="9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314" Dec 13 14:34:38.477603 env[1138]: time="2024-12-13T14:34:38.477485003Z" level=info msg="StopPodSandbox for \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\"" Dec 13 14:34:38.485654 env[1138]: time="2024-12-13T14:34:38.477638240Z" level=info msg="Container to stop \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:34:38.485654 env[1138]: time="2024-12-13T14:34:38.477688714Z" level=info msg="Container to stop \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:34:38.483018 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2-shm.mount: Deactivated successfully. Dec 13 14:34:38.499530 systemd[1]: cri-containerd-173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2.scope: Deactivated successfully. 
Dec 13 14:34:38.555081 env[1138]: time="2024-12-13T14:34:38.554995249Z" level=info msg="RemoveContainer for \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\"" Dec 13 14:34:38.580951 env[1138]: time="2024-12-13T14:34:38.580225531Z" level=info msg="shim disconnected" id=173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2 Dec 13 14:34:38.581311 env[1138]: time="2024-12-13T14:34:38.581227750Z" level=warning msg="cleaning up after shim disconnected" id=173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2 namespace=k8s.io Dec 13 14:34:38.581478 env[1138]: time="2024-12-13T14:34:38.581446740Z" level=info msg="cleaning up dead shim" Dec 13 14:34:38.603830 env[1138]: time="2024-12-13T14:34:38.603756198Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3105 runtime=io.containerd.runc.v2\n" Dec 13 14:34:38.604448 env[1138]: time="2024-12-13T14:34:38.604367524Z" level=info msg="TearDown network for sandbox \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\" successfully" Dec 13 14:34:38.604448 env[1138]: time="2024-12-13T14:34:38.604422216Z" level=info msg="StopPodSandbox for \"173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2\" returns successfully" Dec 13 14:34:38.637795 env[1138]: time="2024-12-13T14:34:38.637712814Z" level=info msg="RemoveContainer for \"9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314\" returns successfully" Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692413 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-cgroup\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692500 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-run\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692552 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-hostproc\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692590 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-xtables-lock\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692629 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-net\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692629 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692667 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-bpf-maps\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692732 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692781 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-kernel\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692797 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692830 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cni-path\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692893 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-config-path\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692947 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-clustermesh-secrets\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692836 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-hostproc" (OuterVolumeSpecName: "hostproc") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.693630 kubelet[1393]: I1213 14:34:38.692861 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.692887 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.692921 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cni-path" (OuterVolumeSpecName: "cni-path") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.692996 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-ipsec-secrets\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693046 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-hubble-tls\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693102 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv87d\" (UniqueName: \"kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-kube-api-access-zv87d\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 
14:34:38.695014 kubelet[1393]: I1213 14:34:38.693143 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-lib-modules\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693181 1393 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-etc-cni-netd\") pod \"10b8393e-49dc-4b3e-b814-d27c6c45447c\" (UID: \"10b8393e-49dc-4b3e-b814-d27c6c45447c\") " Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693318 1393 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-xtables-lock\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693355 1393 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-net\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693380 1393 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-bpf-maps\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693402 1393 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cni-path\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693423 1393 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-cgroup\") on node 
\"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.695014 kubelet[1393]: I1213 14:34:38.693445 1393 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-run\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.697021 kubelet[1393]: I1213 14:34:38.692946 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.697021 kubelet[1393]: I1213 14:34:38.693528 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.702526 kubelet[1393]: I1213 14:34:38.702458 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:34:38.705364 kubelet[1393]: I1213 14:34:38.705305 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:34:38.707082 kubelet[1393]: I1213 14:34:38.707028 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:34:38.709540 kubelet[1393]: I1213 14:34:38.709489 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:34:38.713465 kubelet[1393]: I1213 14:34:38.713387 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-kube-api-access-zv87d" (OuterVolumeSpecName: "kube-api-access-zv87d") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "kube-api-access-zv87d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:34:38.715704 kubelet[1393]: I1213 14:34:38.715643 1393 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10b8393e-49dc-4b3e-b814-d27c6c45447c" (UID: "10b8393e-49dc-4b3e-b814-d27c6c45447c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.794338 1393 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-clustermesh-secrets\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.795508 1393 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-ipsec-secrets\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.795606 1393 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-hubble-tls\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.795633 1393 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zv87d\" (UniqueName: \"kubernetes.io/projected/10b8393e-49dc-4b3e-b814-d27c6c45447c-kube-api-access-zv87d\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.795657 1393 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-lib-modules\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.795679 1393 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-etc-cni-netd\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.795706 1393 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-hostproc\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 
kubelet[1393]: I1213 14:34:38.795728 1393 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10b8393e-49dc-4b3e-b814-d27c6c45447c-host-proc-sys-kernel\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.795927 kubelet[1393]: I1213 14:34:38.795750 1393 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10b8393e-49dc-4b3e-b814-d27c6c45447c-cilium-config-path\") on node \"172.24.4.94\" DevicePath \"\"" Dec 13 14:34:38.798589 kubelet[1393]: E1213 14:34:38.798549 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:38.827566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-173b3a30577e1da154e4ea3ee35e5813f02501df268fa065b062e348241e25a2-rootfs.mount: Deactivated successfully. Dec 13 14:34:38.827692 systemd[1]: var-lib-kubelet-pods-10b8393e\x2d49dc\x2d4b3e\x2db814\x2dd27c6c45447c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzv87d.mount: Deactivated successfully. Dec 13 14:34:38.827766 systemd[1]: var-lib-kubelet-pods-10b8393e\x2d49dc\x2d4b3e\x2db814\x2dd27c6c45447c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:34:38.827847 systemd[1]: var-lib-kubelet-pods-10b8393e\x2d49dc\x2d4b3e\x2db814\x2dd27c6c45447c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:34:38.827927 systemd[1]: var-lib-kubelet-pods-10b8393e\x2d49dc\x2d4b3e\x2db814\x2dd27c6c45447c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:34:39.484543 kubelet[1393]: I1213 14:34:39.484465 1393 scope.go:117] "RemoveContainer" containerID="2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1" Dec 13 14:34:39.490087 env[1138]: time="2024-12-13T14:34:39.489702791Z" level=info msg="RemoveContainer for \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\"" Dec 13 14:34:39.492441 systemd[1]: Removed slice kubepods-burstable-pod10b8393e_49dc_4b3e_b814_d27c6c45447c.slice. Dec 13 14:34:39.682652 env[1138]: time="2024-12-13T14:34:39.682519610Z" level=info msg="RemoveContainer for \"2931f30acaef6dc184ae102281d0bc671cfc454f1702fc1aee88ecf0e56e82c1\" returns successfully" Dec 13 14:34:39.708920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040254008.mount: Deactivated successfully. Dec 13 14:34:39.735872 kubelet[1393]: E1213 14:34:39.735688 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10b8393e-49dc-4b3e-b814-d27c6c45447c" containerName="mount-cgroup" Dec 13 14:34:39.735872 kubelet[1393]: E1213 14:34:39.735770 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10b8393e-49dc-4b3e-b814-d27c6c45447c" containerName="mount-cgroup" Dec 13 14:34:39.736938 kubelet[1393]: I1213 14:34:39.735821 1393 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b8393e-49dc-4b3e-b814-d27c6c45447c" containerName="mount-cgroup" Dec 13 14:34:39.737085 kubelet[1393]: I1213 14:34:39.736972 1393 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b8393e-49dc-4b3e-b814-d27c6c45447c" containerName="mount-cgroup" Dec 13 14:34:39.742293 kubelet[1393]: I1213 14:34:39.742174 1393 setters.go:600] "Node became not ready" node="172.24.4.94" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:34:39Z","lastTransitionTime":"2024-12-13T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized"} Dec 13 14:34:39.749648 systemd[1]: Created slice kubepods-burstable-pod9b8a9f66_d653_4dd6_a620_e84e11371096.slice. Dec 13 14:34:39.800341 kubelet[1393]: E1213 14:34:39.800291 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:39.831657 kubelet[1393]: W1213 14:34:39.829528 1393 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b8393e_49dc_4b3e_b814_d27c6c45447c.slice/cri-containerd-9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314.scope WatchSource:0}: container "9a5c52f92bbf50c2985e9c4ff98a57fec0273042d51842c6f4dbad693acb0314" in namespace "k8s.io": not found Dec 13 14:34:39.902696 kubelet[1393]: I1213 14:34:39.902610 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-xtables-lock\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.902925 kubelet[1393]: I1213 14:34:39.902712 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-etc-cni-netd\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.902925 kubelet[1393]: I1213 14:34:39.902757 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b8a9f66-d653-4dd6-a620-e84e11371096-clustermesh-secrets\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.902925 kubelet[1393]: I1213 14:34:39.902802 1393 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b8a9f66-d653-4dd6-a620-e84e11371096-cilium-config-path\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.902925 kubelet[1393]: I1213 14:34:39.902842 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-host-proc-sys-net\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.902925 kubelet[1393]: I1213 14:34:39.902903 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-cilium-cgroup\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903106 kubelet[1393]: I1213 14:34:39.902944 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-cni-path\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903106 kubelet[1393]: I1213 14:34:39.902985 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-hostproc\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903106 kubelet[1393]: I1213 14:34:39.903029 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/9b8a9f66-d653-4dd6-a620-e84e11371096-cilium-ipsec-secrets\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903106 kubelet[1393]: I1213 14:34:39.903070 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-host-proc-sys-kernel\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903225 kubelet[1393]: I1213 14:34:39.903112 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b8a9f66-d653-4dd6-a620-e84e11371096-hubble-tls\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903225 kubelet[1393]: I1213 14:34:39.903153 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-bpf-maps\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903225 kubelet[1393]: I1213 14:34:39.903190 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwv4h\" (UniqueName: \"kubernetes.io/projected/9b8a9f66-d653-4dd6-a620-e84e11371096-kube-api-access-pwv4h\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903351 kubelet[1393]: I1213 14:34:39.903229 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-cilium-run\") pod \"cilium-9l2mr\" (UID: 
\"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.903351 kubelet[1393]: I1213 14:34:39.903335 1393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b8a9f66-d653-4dd6-a620-e84e11371096-lib-modules\") pod \"cilium-9l2mr\" (UID: \"9b8a9f66-d653-4dd6-a620-e84e11371096\") " pod="kube-system/cilium-9l2mr" Dec 13 14:34:39.925711 kubelet[1393]: I1213 14:34:39.925595 1393 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b8393e-49dc-4b3e-b814-d27c6c45447c" path="/var/lib/kubelet/pods/10b8393e-49dc-4b3e-b814-d27c6c45447c/volumes" Dec 13 14:34:40.360325 env[1138]: time="2024-12-13T14:34:40.359935220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9l2mr,Uid:9b8a9f66-d653-4dd6-a620-e84e11371096,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:40.417820 env[1138]: time="2024-12-13T14:34:40.417657357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:40.417820 env[1138]: time="2024-12-13T14:34:40.417716698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:40.417820 env[1138]: time="2024-12-13T14:34:40.417730745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:40.418338 env[1138]: time="2024-12-13T14:34:40.418210363Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432 pid=3131 runtime=io.containerd.runc.v2 Dec 13 14:34:40.457590 systemd[1]: Started cri-containerd-fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432.scope. 
Dec 13 14:34:40.507660 env[1138]: time="2024-12-13T14:34:40.507528413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9l2mr,Uid:9b8a9f66-d653-4dd6-a620-e84e11371096,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\"" Dec 13 14:34:40.511136 env[1138]: time="2024-12-13T14:34:40.511085430Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:34:40.659587 env[1138]: time="2024-12-13T14:34:40.659470607Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"55199c4c819bfe0ba4167d4f3721fc47a0efb1182569490eee9396cf919381e6\"" Dec 13 14:34:40.661949 env[1138]: time="2024-12-13T14:34:40.661869133Z" level=info msg="StartContainer for \"55199c4c819bfe0ba4167d4f3721fc47a0efb1182569490eee9396cf919381e6\"" Dec 13 14:34:40.704007 systemd[1]: Started cri-containerd-55199c4c819bfe0ba4167d4f3721fc47a0efb1182569490eee9396cf919381e6.scope. Dec 13 14:34:40.803030 kubelet[1393]: E1213 14:34:40.802747 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:40.912428 env[1138]: time="2024-12-13T14:34:40.912056737Z" level=info msg="StartContainer for \"55199c4c819bfe0ba4167d4f3721fc47a0efb1182569490eee9396cf919381e6\" returns successfully" Dec 13 14:34:40.955590 systemd[1]: cri-containerd-55199c4c819bfe0ba4167d4f3721fc47a0efb1182569490eee9396cf919381e6.scope: Deactivated successfully. 
Dec 13 14:34:41.005175 env[1138]: time="2024-12-13T14:34:41.005049555Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:41.018105 env[1138]: time="2024-12-13T14:34:41.018029260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:41.387762 env[1138]: time="2024-12-13T14:34:41.387492932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:41.390412 env[1138]: time="2024-12-13T14:34:41.390292118Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:34:41.393045 env[1138]: time="2024-12-13T14:34:41.392902269Z" level=info msg="shim disconnected" id=55199c4c819bfe0ba4167d4f3721fc47a0efb1182569490eee9396cf919381e6 Dec 13 14:34:41.393045 env[1138]: time="2024-12-13T14:34:41.393006074Z" level=warning msg="cleaning up after shim disconnected" id=55199c4c819bfe0ba4167d4f3721fc47a0efb1182569490eee9396cf919381e6 namespace=k8s.io Dec 13 14:34:41.393045 env[1138]: time="2024-12-13T14:34:41.393035299Z" level=info msg="cleaning up dead shim" Dec 13 14:34:41.398102 env[1138]: time="2024-12-13T14:34:41.397972141Z" level=info msg="CreateContainer within sandbox \"8c59c9a4f2475fe122e70c343508d3cd4bdaffc932b02c095de7acd2b43b601c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:34:41.418591 
env[1138]: time="2024-12-13T14:34:41.418469135Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3216 runtime=io.containerd.runc.v2\n" Dec 13 14:34:41.440902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246776898.mount: Deactivated successfully. Dec 13 14:34:41.452604 env[1138]: time="2024-12-13T14:34:41.452499782Z" level=info msg="CreateContainer within sandbox \"8c59c9a4f2475fe122e70c343508d3cd4bdaffc932b02c095de7acd2b43b601c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"03078c7358686d20e67ac4bb9bd48dd1539e9c2625d05a5d3153a563b9a1ca35\"" Dec 13 14:34:41.455053 env[1138]: time="2024-12-13T14:34:41.454954403Z" level=info msg="StartContainer for \"03078c7358686d20e67ac4bb9bd48dd1539e9c2625d05a5d3153a563b9a1ca35\"" Dec 13 14:34:41.491476 systemd[1]: Started cri-containerd-03078c7358686d20e67ac4bb9bd48dd1539e9c2625d05a5d3153a563b9a1ca35.scope. Dec 13 14:34:41.501789 env[1138]: time="2024-12-13T14:34:41.501740315Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:34:41.532501 env[1138]: time="2024-12-13T14:34:41.532426014Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da2bc814b54bcaf2768ae47825395755572dd366e89e07d8a1fdeb358f8be111\"" Dec 13 14:34:41.534125 env[1138]: time="2024-12-13T14:34:41.534075656Z" level=info msg="StartContainer for \"da2bc814b54bcaf2768ae47825395755572dd366e89e07d8a1fdeb358f8be111\"" Dec 13 14:34:41.547908 env[1138]: time="2024-12-13T14:34:41.547826787Z" level=info msg="StartContainer for \"03078c7358686d20e67ac4bb9bd48dd1539e9c2625d05a5d3153a563b9a1ca35\" returns successfully" Dec 13 14:34:41.577159 systemd[1]: 
Started cri-containerd-da2bc814b54bcaf2768ae47825395755572dd366e89e07d8a1fdeb358f8be111.scope. Dec 13 14:34:41.624549 env[1138]: time="2024-12-13T14:34:41.624461280Z" level=info msg="StartContainer for \"da2bc814b54bcaf2768ae47825395755572dd366e89e07d8a1fdeb358f8be111\" returns successfully" Dec 13 14:34:41.645848 systemd[1]: cri-containerd-da2bc814b54bcaf2768ae47825395755572dd366e89e07d8a1fdeb358f8be111.scope: Deactivated successfully. Dec 13 14:34:41.690447 env[1138]: time="2024-12-13T14:34:41.690373398Z" level=info msg="shim disconnected" id=da2bc814b54bcaf2768ae47825395755572dd366e89e07d8a1fdeb358f8be111 Dec 13 14:34:41.690795 env[1138]: time="2024-12-13T14:34:41.690772495Z" level=warning msg="cleaning up after shim disconnected" id=da2bc814b54bcaf2768ae47825395755572dd366e89e07d8a1fdeb358f8be111 namespace=k8s.io Dec 13 14:34:41.690888 env[1138]: time="2024-12-13T14:34:41.690858717Z" level=info msg="cleaning up dead shim" Dec 13 14:34:41.704627 env[1138]: time="2024-12-13T14:34:41.704587707Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3317 runtime=io.containerd.runc.v2\n" Dec 13 14:34:41.803528 kubelet[1393]: E1213 14:34:41.803461 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:42.512306 env[1138]: time="2024-12-13T14:34:42.512183357Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:34:42.805041 kubelet[1393]: E1213 14:34:42.804344 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:42.881166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574865985.mount: Deactivated successfully. 
Dec 13 14:34:42.919405 kubelet[1393]: E1213 14:34:42.919294 1393 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:34:43.200838 env[1138]: time="2024-12-13T14:34:43.200714094Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b\"" Dec 13 14:34:43.202543 env[1138]: time="2024-12-13T14:34:43.202475165Z" level=info msg="StartContainer for \"3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b\"" Dec 13 14:34:43.265233 systemd[1]: run-containerd-runc-k8s.io-3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b-runc.Su9Y9L.mount: Deactivated successfully. Dec 13 14:34:43.274424 systemd[1]: Started cri-containerd-3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b.scope. Dec 13 14:34:43.413557 env[1138]: time="2024-12-13T14:34:43.413409376Z" level=info msg="StartContainer for \"3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b\" returns successfully" Dec 13 14:34:43.539030 systemd[1]: cri-containerd-3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b.scope: Deactivated successfully. Dec 13 14:34:43.592518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b-rootfs.mount: Deactivated successfully. 
Dec 13 14:34:43.605539 kubelet[1393]: I1213 14:34:43.605390 1393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pdv8j" podStartSLOduration=3.219262461 podStartE2EDuration="8.605345381s" podCreationTimestamp="2024-12-13 14:34:35 +0000 UTC" firstStartedPulling="2024-12-13 14:34:36.008035448 +0000 UTC m=+89.865521400" lastFinishedPulling="2024-12-13 14:34:41.394118318 +0000 UTC m=+95.251604320" observedRunningTime="2024-12-13 14:34:42.939163144 +0000 UTC m=+96.796649146" watchObservedRunningTime="2024-12-13 14:34:43.605345381 +0000 UTC m=+97.462831383" Dec 13 14:34:43.656118 env[1138]: time="2024-12-13T14:34:43.655858512Z" level=info msg="shim disconnected" id=3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b Dec 13 14:34:43.656118 env[1138]: time="2024-12-13T14:34:43.655969269Z" level=warning msg="cleaning up after shim disconnected" id=3dca4400edfec4af474155651e5336c95a492039e24fcf425d9acdc028c1724b namespace=k8s.io Dec 13 14:34:43.656118 env[1138]: time="2024-12-13T14:34:43.655995128Z" level=info msg="cleaning up dead shim" Dec 13 14:34:43.671445 env[1138]: time="2024-12-13T14:34:43.671347802Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3378 runtime=io.containerd.runc.v2\n" Dec 13 14:34:43.805402 kubelet[1393]: E1213 14:34:43.804653 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:44.535141 env[1138]: time="2024-12-13T14:34:44.535053425Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:34:44.570691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681689213.mount: Deactivated successfully. 
Dec 13 14:34:44.593669 env[1138]: time="2024-12-13T14:34:44.593481781Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781\"" Dec 13 14:34:44.595482 env[1138]: time="2024-12-13T14:34:44.595387853Z" level=info msg="StartContainer for \"0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781\"" Dec 13 14:34:44.639822 systemd[1]: Started cri-containerd-0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781.scope. Dec 13 14:34:44.683613 systemd[1]: cri-containerd-0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781.scope: Deactivated successfully. Dec 13 14:34:44.685494 env[1138]: time="2024-12-13T14:34:44.685434761Z" level=info msg="StartContainer for \"0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781\" returns successfully" Dec 13 14:34:44.712750 env[1138]: time="2024-12-13T14:34:44.712656591Z" level=info msg="shim disconnected" id=0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781 Dec 13 14:34:44.712750 env[1138]: time="2024-12-13T14:34:44.712740898Z" level=warning msg="cleaning up after shim disconnected" id=0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781 namespace=k8s.io Dec 13 14:34:44.712750 env[1138]: time="2024-12-13T14:34:44.712754824Z" level=info msg="cleaning up dead shim" Dec 13 14:34:44.721889 env[1138]: time="2024-12-13T14:34:44.721837798Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3435 runtime=io.containerd.runc.v2\n" Dec 13 14:34:44.805703 kubelet[1393]: E1213 14:34:44.805482 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:45.543405 env[1138]: time="2024-12-13T14:34:45.543323244Z" level=info 
msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:34:45.560653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0981e835e37a9f2dac00d63dd9d429d802bbe3c538591b66ec4c9ad281e40781-rootfs.mount: Deactivated successfully. Dec 13 14:34:45.595758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3639936183.mount: Deactivated successfully. Dec 13 14:34:45.613072 env[1138]: time="2024-12-13T14:34:45.612941346Z" level=info msg="CreateContainer within sandbox \"fa66c19126e8ce3562c96480d7e6588e143dcbffc1b744e9eee0fa0ad133a432\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5\"" Dec 13 14:34:45.616289 env[1138]: time="2024-12-13T14:34:45.614622826Z" level=info msg="StartContainer for \"891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5\"" Dec 13 14:34:45.663282 systemd[1]: Started cri-containerd-891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5.scope. 
Dec 13 14:34:45.709596 env[1138]: time="2024-12-13T14:34:45.709554692Z" level=info msg="StartContainer for \"891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5\" returns successfully" Dec 13 14:34:45.806120 kubelet[1393]: E1213 14:34:45.806008 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:46.705316 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:34:46.763311 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Dec 13 14:34:46.806856 kubelet[1393]: E1213 14:34:46.806731 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:47.682416 kubelet[1393]: E1213 14:34:47.682356 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:47.807310 kubelet[1393]: E1213 14:34:47.807174 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:48.709840 systemd[1]: run-containerd-runc-k8s.io-891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5-runc.AesTOK.mount: Deactivated successfully. 
Dec 13 14:34:48.808134 kubelet[1393]: E1213 14:34:48.808042 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:49.809011 kubelet[1393]: E1213 14:34:49.808958 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:50.139612 systemd-networkd[977]: lxc_health: Link UP
Dec 13 14:34:50.148290 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:34:50.151400 systemd-networkd[977]: lxc_health: Gained carrier
Dec 13 14:34:50.404659 kubelet[1393]: I1213 14:34:50.404512 1393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9l2mr" podStartSLOduration=11.404476107 podStartE2EDuration="11.404476107s" podCreationTimestamp="2024-12-13 14:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:34:46.591722816 +0000 UTC m=+100.449208769" watchObservedRunningTime="2024-12-13 14:34:50.404476107 +0000 UTC m=+104.261962109"
Dec 13 14:34:50.810000 kubelet[1393]: E1213 14:34:50.809816 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:51.031071 systemd[1]: run-containerd-runc-k8s.io-891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5-runc.sd4IWv.mount: Deactivated successfully.
Dec 13 14:34:51.811782 kubelet[1393]: E1213 14:34:51.811717 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:51.866706 systemd-networkd[977]: lxc_health: Gained IPv6LL
Dec 13 14:34:52.812906 kubelet[1393]: E1213 14:34:52.812776 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:53.270272 systemd[1]: run-containerd-runc-k8s.io-891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5-runc.GR8vJJ.mount: Deactivated successfully.
Dec 13 14:34:53.814640 kubelet[1393]: E1213 14:34:53.814589 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:54.815859 kubelet[1393]: E1213 14:34:54.815790 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:55.458688 systemd[1]: run-containerd-runc-k8s.io-891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5-runc.Xuk0qG.mount: Deactivated successfully.
Dec 13 14:34:55.817737 kubelet[1393]: E1213 14:34:55.817559 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:56.819008 kubelet[1393]: E1213 14:34:56.818848 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:57.820861 kubelet[1393]: E1213 14:34:57.820740 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:57.844615 systemd[1]: run-containerd-runc-k8s.io-891032bb1a204af1c2ee6a7ede67c9f129b067c11c58efb49c1d90dadafa38e5-runc.d9Mik2.mount: Deactivated successfully.
Dec 13 14:34:58.821463 kubelet[1393]: E1213 14:34:58.821363 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:59.822541 kubelet[1393]: E1213 14:34:59.822478 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:00.824149 kubelet[1393]: E1213 14:35:00.824087 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:01.825793 kubelet[1393]: E1213 14:35:01.825686 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:02.826014 kubelet[1393]: E1213 14:35:02.825939 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:03.827837 kubelet[1393]: E1213 14:35:03.827772 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:04.828796 kubelet[1393]: E1213 14:35:04.828710 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"