Dec 13 04:11:24.960982 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 04:11:24.961004 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:11:24.961016 kernel: BIOS-provided physical RAM map:
Dec 13 04:11:24.961023 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 04:11:24.961030 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 04:11:24.961037 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 04:11:24.961044 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 04:11:24.961051 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 04:11:24.961060 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 04:11:24.961067 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 04:11:24.961073 kernel: NX (Execute Disable) protection: active
Dec 13 04:11:24.961080 kernel: SMBIOS 2.8 present.
Dec 13 04:11:24.961086 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 04:11:24.961094 kernel: Hypervisor detected: KVM
Dec 13 04:11:24.961102 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 04:11:24.961111 kernel: kvm-clock: cpu 0, msr 6319b001, primary cpu clock
Dec 13 04:11:24.961118 kernel: kvm-clock: using sched offset of 7079467096 cycles
Dec 13 04:11:24.961126 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 04:11:24.961134 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 04:11:24.961141 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 04:11:24.961149 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 04:11:24.961156 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 04:11:24.961164 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 04:11:24.961173 kernel: ACPI: Early table checksum verification disabled
Dec 13 04:11:24.961180 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 04:11:24.961187 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:11:24.961195 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:11:24.961202 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:11:24.961209 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 04:11:24.961217 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:11:24.961224 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:11:24.961231 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 04:11:24.961240 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 04:11:24.961247 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 04:11:24.961254 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 04:11:24.961261 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 04:11:24.961268 kernel: No NUMA configuration found
Dec 13 04:11:24.961276 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 04:11:24.961283 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 04:11:24.961290 kernel: Zone ranges:
Dec 13 04:11:24.961317 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 04:11:24.961325 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 04:11:24.961333 kernel: Normal empty
Dec 13 04:11:24.961340 kernel: Movable zone start for each node
Dec 13 04:11:24.961348 kernel: Early memory node ranges
Dec 13 04:11:24.961355 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 04:11:24.961364 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 04:11:24.961372 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 04:11:24.961379 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 04:11:24.961387 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 04:11:24.961394 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 04:11:24.961402 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 04:11:24.961409 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 04:11:24.961417 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 04:11:24.961424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 04:11:24.961433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 04:11:24.961441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 04:11:24.961449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 04:11:24.961456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 04:11:24.961464 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 04:11:24.961472 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 04:11:24.961479 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 04:11:24.961487 kernel: Booting paravirtualized kernel on KVM
Dec 13 04:11:24.961494 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 04:11:24.961502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 04:11:24.961511 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 04:11:24.961519 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 04:11:24.961526 kernel: pcpu-alloc: [0] 0 1
Dec 13 04:11:24.961534 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Dec 13 04:11:24.961541 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 04:11:24.961549 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 04:11:24.961556 kernel: Policy zone: DMA32
Dec 13 04:11:24.961565 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:11:24.961575 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 04:11:24.961583 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 04:11:24.961590 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 04:11:24.961598 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 04:11:24.961606 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Dec 13 04:11:24.961614 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 04:11:24.961621 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 04:11:24.961629 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 04:11:24.961638 kernel: rcu: Hierarchical RCU implementation.
Dec 13 04:11:24.961646 kernel: rcu: RCU event tracing is enabled.
Dec 13 04:11:24.961654 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 04:11:24.961662 kernel: Rude variant of Tasks RCU enabled.
Dec 13 04:11:24.961669 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 04:11:24.961677 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 04:11:24.961685 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 04:11:24.961692 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 04:11:24.961700 kernel: Console: colour VGA+ 80x25
Dec 13 04:11:24.961709 kernel: printk: console [tty0] enabled
Dec 13 04:11:24.961716 kernel: printk: console [ttyS0] enabled
Dec 13 04:11:24.961724 kernel: ACPI: Core revision 20210730
Dec 13 04:11:24.961731 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 04:11:24.961739 kernel: x2apic enabled
Dec 13 04:11:24.961747 kernel: Switched APIC routing to physical x2apic.
Dec 13 04:11:24.961754 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 04:11:24.961762 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 04:11:24.961770 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 04:11:24.961777 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 04:11:24.961786 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 04:11:24.961794 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 04:11:24.961802 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 04:11:24.961809 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 04:11:24.961817 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 04:11:24.961825 kernel: Speculative Store Bypass: Vulnerable
Dec 13 04:11:24.961832 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 04:11:24.961840 kernel: Freeing SMP alternatives memory: 32K
Dec 13 04:11:24.961847 kernel: pid_max: default: 32768 minimum: 301
Dec 13 04:11:24.961857 kernel: LSM: Security Framework initializing
Dec 13 04:11:24.961864 kernel: SELinux: Initializing.
Dec 13 04:11:24.961872 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 04:11:24.961879 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 04:11:24.961887 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 04:11:24.961895 kernel: Performance Events: AMD PMU driver.
Dec 13 04:11:24.961902 kernel: ... version: 0
Dec 13 04:11:24.961910 kernel: ... bit width: 48
Dec 13 04:11:24.961918 kernel: ... generic registers: 4
Dec 13 04:11:24.961932 kernel: ... value mask: 0000ffffffffffff
Dec 13 04:11:24.961940 kernel: ... max period: 00007fffffffffff
Dec 13 04:11:24.961949 kernel: ... fixed-purpose events: 0
Dec 13 04:11:24.961957 kernel: ... event mask: 000000000000000f
Dec 13 04:11:24.961965 kernel: signal: max sigframe size: 1440
Dec 13 04:11:24.961973 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 04:11:24.961981 kernel: smp: Bringing up secondary CPUs ...
Dec 13 04:11:24.961989 kernel: x86: Booting SMP configuration:
Dec 13 04:11:24.961998 kernel: .... node #0, CPUs: #1
Dec 13 04:11:24.962006 kernel: kvm-clock: cpu 1, msr 6319b041, secondary cpu clock
Dec 13 04:11:24.962014 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Dec 13 04:11:24.962022 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 04:11:24.962030 kernel: smpboot: Max logical packages: 2
Dec 13 04:11:24.962037 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 04:11:24.962045 kernel: devtmpfs: initialized
Dec 13 04:11:24.962053 kernel: x86/mm: Memory block size: 128MB
Dec 13 04:11:24.962061 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 04:11:24.962071 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 04:11:24.962079 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 04:11:24.962087 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 04:11:24.962095 kernel: audit: initializing netlink subsys (disabled)
Dec 13 04:11:24.962103 kernel: audit: type=2000 audit(1734063084.847:1): state=initialized audit_enabled=0 res=1
Dec 13 04:11:24.962110 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 04:11:24.962118 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 04:11:24.962126 kernel: cpuidle: using governor menu
Dec 13 04:11:24.962134 kernel: ACPI: bus type PCI registered
Dec 13 04:11:24.962144 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 04:11:24.962152 kernel: dca service started, version 1.12.1
Dec 13 04:11:24.962159 kernel: PCI: Using configuration type 1 for base access
Dec 13 04:11:24.962168 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 04:11:24.962175 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 04:11:24.962183 kernel: ACPI: Added _OSI(Module Device)
Dec 13 04:11:24.962191 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 04:11:24.962199 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 04:11:24.962207 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 04:11:24.962216 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 04:11:24.962224 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 04:11:24.962232 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 04:11:24.962240 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 04:11:24.962248 kernel: ACPI: Interpreter enabled
Dec 13 04:11:24.962256 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 04:11:24.962264 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 04:11:24.962272 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 04:11:24.962280 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 04:11:24.962289 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 04:11:24.964545 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 04:11:24.964640 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 04:11:24.964653 kernel: acpiphp: Slot [3] registered
Dec 13 04:11:24.964661 kernel: acpiphp: Slot [4] registered
Dec 13 04:11:24.964669 kernel: acpiphp: Slot [5] registered
Dec 13 04:11:24.964676 kernel: acpiphp: Slot [6] registered
Dec 13 04:11:24.964687 kernel: acpiphp: Slot [7] registered
Dec 13 04:11:24.964695 kernel: acpiphp: Slot [8] registered
Dec 13 04:11:24.964703 kernel: acpiphp: Slot [9] registered
Dec 13 04:11:24.964711 kernel: acpiphp: Slot [10] registered
Dec 13 04:11:24.964719 kernel: acpiphp: Slot [11] registered
Dec 13 04:11:24.964727 kernel: acpiphp: Slot [12] registered
Dec 13 04:11:24.964735 kernel: acpiphp: Slot [13] registered
Dec 13 04:11:24.964743 kernel: acpiphp: Slot [14] registered
Dec 13 04:11:24.964750 kernel: acpiphp: Slot [15] registered
Dec 13 04:11:24.964758 kernel: acpiphp: Slot [16] registered
Dec 13 04:11:24.964767 kernel: acpiphp: Slot [17] registered
Dec 13 04:11:24.964775 kernel: acpiphp: Slot [18] registered
Dec 13 04:11:24.964783 kernel: acpiphp: Slot [19] registered
Dec 13 04:11:24.964791 kernel: acpiphp: Slot [20] registered
Dec 13 04:11:24.964799 kernel: acpiphp: Slot [21] registered
Dec 13 04:11:24.964807 kernel: acpiphp: Slot [22] registered
Dec 13 04:11:24.964814 kernel: acpiphp: Slot [23] registered
Dec 13 04:11:24.964822 kernel: acpiphp: Slot [24] registered
Dec 13 04:11:24.964830 kernel: acpiphp: Slot [25] registered
Dec 13 04:11:24.964839 kernel: acpiphp: Slot [26] registered
Dec 13 04:11:24.964847 kernel: acpiphp: Slot [27] registered
Dec 13 04:11:24.964854 kernel: acpiphp: Slot [28] registered
Dec 13 04:11:24.964862 kernel: acpiphp: Slot [29] registered
Dec 13 04:11:24.964870 kernel: acpiphp: Slot [30] registered
Dec 13 04:11:24.964878 kernel: acpiphp: Slot [31] registered
Dec 13 04:11:24.964886 kernel: PCI host bridge to bus 0000:00
Dec 13 04:11:24.964981 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 04:11:24.965064 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 04:11:24.965147 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 04:11:24.965221 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 04:11:24.965307 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 04:11:24.965385 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 04:11:24.965481 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 04:11:24.965581 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 04:11:24.965678 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 04:11:24.965763 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 04:11:24.965845 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 04:11:24.965926 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 04:11:24.966008 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 04:11:24.966088 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 04:11:24.966177 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 04:11:24.966265 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 04:11:24.966365 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 04:11:24.966459 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 04:11:24.966542 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 04:11:24.966625 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 04:11:24.966706 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 04:11:24.966792 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 04:11:24.966875 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 04:11:24.966971 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 04:11:24.967054 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 04:11:24.967180 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 04:11:24.967266 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 04:11:24.967366 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 04:11:24.968498 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 04:11:24.968628 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 04:11:24.968723 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 04:11:24.968822 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 04:11:24.968931 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 04:11:24.969030 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 04:11:24.969129 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 04:11:24.969240 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 04:11:24.969369 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 04:11:24.969456 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 04:11:24.969469 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 04:11:24.969478 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 04:11:24.969487 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 04:11:24.969495 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 04:11:24.969504 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 04:11:24.969516 kernel: iommu: Default domain type: Translated
Dec 13 04:11:24.969525 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 04:11:24.969613 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 04:11:24.969701 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 04:11:24.969787 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 04:11:24.969799 kernel: vgaarb: loaded
Dec 13 04:11:24.969808 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 04:11:24.969817 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 04:11:24.969825 kernel: PTP clock support registered
Dec 13 04:11:24.969837 kernel: PCI: Using ACPI for IRQ routing
Dec 13 04:11:24.969845 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 04:11:24.969854 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 04:11:24.969862 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 04:11:24.969870 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 04:11:24.969879 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 04:11:24.969887 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 04:11:24.969896 kernel: pnp: PnP ACPI init
Dec 13 04:11:24.969989 kernel: pnp 00:03: [dma 2]
Dec 13 04:11:24.970006 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 04:11:24.970014 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 04:11:24.970023 kernel: NET: Registered PF_INET protocol family
Dec 13 04:11:24.970031 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 04:11:24.970039 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 04:11:24.970048 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 04:11:24.970057 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 04:11:24.970065 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 04:11:24.970075 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 04:11:24.970083 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 04:11:24.970092 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 04:11:24.970100 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 04:11:24.970108 kernel: NET: Registered PF_XDP protocol family
Dec 13 04:11:24.970197 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 04:11:24.970280 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 04:11:24.970380 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 04:11:24.970461 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 04:11:24.970544 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 04:11:24.970638 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 04:11:24.970731 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 04:11:24.970818 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 04:11:24.970830 kernel: PCI: CLS 0 bytes, default 64
Dec 13 04:11:24.970839 kernel: Initialise system trusted keyrings
Dec 13 04:11:24.970847 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 04:11:24.970859 kernel: Key type asymmetric registered
Dec 13 04:11:24.970867 kernel: Asymmetric key parser 'x509' registered
Dec 13 04:11:24.970875 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 04:11:24.970883 kernel: io scheduler mq-deadline registered
Dec 13 04:11:24.970892 kernel: io scheduler kyber registered
Dec 13 04:11:24.970900 kernel: io scheduler bfq registered
Dec 13 04:11:24.970908 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 04:11:24.970917 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 04:11:24.970926 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 04:11:24.970934 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 04:11:24.970944 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 04:11:24.970952 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 04:11:24.970961 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 04:11:24.970969 kernel: random: crng init done
Dec 13 04:11:24.970977 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 04:11:24.970985 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 04:11:24.970994 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 04:11:24.971083 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 04:11:24.971099 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 04:11:24.971179 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 04:11:24.971260 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T04:11:24 UTC (1734063084)
Dec 13 04:11:24.971359 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 04:11:24.971372 kernel: NET: Registered PF_INET6 protocol family
Dec 13 04:11:24.971380 kernel: Segment Routing with IPv6
Dec 13 04:11:24.971389 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 04:11:24.971397 kernel: NET: Registered PF_PACKET protocol family
Dec 13 04:11:24.971405 kernel: Key type dns_resolver registered
Dec 13 04:11:24.971417 kernel: IPI shorthand broadcast: enabled
Dec 13 04:11:24.971425 kernel: sched_clock: Marking stable (712649949, 121229904)->(900824106, -66944253)
Dec 13 04:11:24.971434 kernel: registered taskstats version 1
Dec 13 04:11:24.971442 kernel: Loading compiled-in X.509 certificates
Dec 13 04:11:24.971450 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 04:11:24.971459 kernel: Key type .fscrypt registered
Dec 13 04:11:24.971467 kernel: Key type fscrypt-provisioning registered
Dec 13 04:11:24.971476 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 04:11:24.971485 kernel: ima: Allocated hash algorithm: sha1
Dec 13 04:11:24.971494 kernel: ima: No architecture policies found
Dec 13 04:11:24.971502 kernel: clk: Disabling unused clocks
Dec 13 04:11:24.971510 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 04:11:24.971519 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 04:11:24.971527 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 04:11:24.971535 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 04:11:24.971544 kernel: Run /init as init process
Dec 13 04:11:24.971552 kernel: with arguments:
Dec 13 04:11:24.971562 kernel: /init
Dec 13 04:11:24.971570 kernel: with environment:
Dec 13 04:11:24.971578 kernel: HOME=/
Dec 13 04:11:24.971586 kernel: TERM=linux
Dec 13 04:11:24.971594 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 04:11:24.971606 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 04:11:24.971619 systemd[1]: Detected virtualization kvm.
Dec 13 04:11:24.971629 systemd[1]: Detected architecture x86-64.
Dec 13 04:11:24.971640 systemd[1]: Running in initrd.
Dec 13 04:11:24.971649 systemd[1]: No hostname configured, using default hostname.
Dec 13 04:11:24.971658 systemd[1]: Hostname set to .
Dec 13 04:11:24.971668 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 04:11:24.971677 systemd[1]: Queued start job for default target initrd.target.
Dec 13 04:11:24.971686 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 04:11:24.971695 systemd[1]: Reached target cryptsetup.target.
Dec 13 04:11:24.971704 systemd[1]: Reached target paths.target.
Dec 13 04:11:24.971714 systemd[1]: Reached target slices.target.
Dec 13 04:11:24.971723 systemd[1]: Reached target swap.target.
Dec 13 04:11:24.971732 systemd[1]: Reached target timers.target.
Dec 13 04:11:24.971742 systemd[1]: Listening on iscsid.socket.
Dec 13 04:11:24.971750 systemd[1]: Listening on iscsiuio.socket.
Dec 13 04:11:24.971759 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 04:11:24.971768 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 04:11:24.971779 systemd[1]: Listening on systemd-journald.socket.
Dec 13 04:11:24.971788 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 04:11:24.971797 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 04:11:24.971805 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 04:11:24.971815 systemd[1]: Reached target sockets.target.
Dec 13 04:11:24.971832 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 04:11:24.971842 systemd[1]: Finished network-cleanup.service.
Dec 13 04:11:24.971853 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 04:11:24.971862 systemd[1]: Starting systemd-journald.service...
Dec 13 04:11:24.971871 systemd[1]: Starting systemd-modules-load.service...
Dec 13 04:11:24.971880 systemd[1]: Starting systemd-resolved.service...
Dec 13 04:11:24.971889 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 04:11:24.971898 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 04:11:24.971907 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 04:11:24.971921 systemd-journald[185]: Journal started
Dec 13 04:11:24.971973 systemd-journald[185]: Runtime Journal (/run/log/journal/4edb8460fa194c8fb3eeacb5f5a58ac2) is 4.9M, max 39.5M, 34.5M free.
Dec 13 04:11:24.960370 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 04:11:25.002562 kernel: audit: type=1130 audit(1734063084.997:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.002588 systemd[1]: Started systemd-resolved.service.
Dec 13 04:11:24.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:24.960381 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 04:11:24.960418 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 04:11:24.963088 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 04:11:24.964384 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 04:11:25.014396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 04:11:25.014421 kernel: audit: type=1130 audit(1734063085.009:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.014432 systemd[1]: Started systemd-journald.service.
Dec 13 04:11:25.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.014838 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 04:11:25.018327 kernel: audit: type=1130 audit(1734063085.014:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.019065 systemd[1]: Reached target nss-lookup.target.
Dec 13 04:11:25.025601 kernel: audit: type=1130 audit(1734063085.018:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.025623 kernel: Bridge firewalling registered
Dec 13 04:11:25.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.023716 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 04:11:25.023916 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 04:11:25.027235 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 04:11:25.035880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 04:11:25.042455 kernel: audit: type=1130 audit(1734063085.036:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.049119 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 04:11:25.053521 kernel: audit: type=1130 audit(1734063085.049:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.053661 systemd[1]: Starting dracut-cmdline.service...
Dec 13 04:11:25.060329 kernel: SCSI subsystem initialized
Dec 13 04:11:25.065328 dracut-cmdline[203]: dracut-dracut-053
Dec 13 04:11:25.069321 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:11:25.078966 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 04:11:25.079001 kernel: device-mapper: uevent: version 1.0.3
Dec 13 04:11:25.080668 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 04:11:25.084142 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 04:11:25.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.085702 systemd[1]: Finished systemd-modules-load.service. Dec 13 04:11:25.091194 kernel: audit: type=1130 audit(1734063085.085:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 04:11:25.089911 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:11:25.098682 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:11:25.102996 kernel: audit: type=1130 audit(1734063085.098:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.134346 kernel: Loading iSCSI transport class v2.0-870. Dec 13 04:11:25.153596 kernel: iscsi: registered transport (tcp) Dec 13 04:11:25.179374 kernel: iscsi: registered transport (qla4xxx) Dec 13 04:11:25.179408 kernel: QLogic iSCSI HBA Driver Dec 13 04:11:25.212881 systemd[1]: Finished dracut-cmdline.service. Dec 13 04:11:25.219564 kernel: audit: type=1130 audit(1734063085.213:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.214389 systemd[1]: Starting dracut-pre-udev.service... Dec 13 04:11:25.268645 kernel: raid6: sse2x4 gen() 13078 MB/s Dec 13 04:11:25.285350 kernel: raid6: sse2x4 xor() 5082 MB/s Dec 13 04:11:25.302369 kernel: raid6: sse2x2 gen() 14373 MB/s Dec 13 04:11:25.319395 kernel: raid6: sse2x2 xor() 8844 MB/s Dec 13 04:11:25.336392 kernel: raid6: sse2x1 gen() 11083 MB/s Dec 13 04:11:25.354084 kernel: raid6: sse2x1 xor() 7016 MB/s Dec 13 04:11:25.354152 kernel: raid6: using algorithm sse2x2 gen() 14373 MB/s Dec 13 04:11:25.354180 kernel: raid6: .... 
xor() 8844 MB/s, rmw enabled Dec 13 04:11:25.355017 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 04:11:25.370127 kernel: xor: measuring software checksum speed Dec 13 04:11:25.370186 kernel: prefetch64-sse : 18336 MB/sec Dec 13 04:11:25.371057 kernel: generic_sse : 16723 MB/sec Dec 13 04:11:25.371093 kernel: xor: using function: prefetch64-sse (18336 MB/sec) Dec 13 04:11:25.485341 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 04:11:25.501032 systemd[1]: Finished dracut-pre-udev.service. Dec 13 04:11:25.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.502000 audit: BPF prog-id=7 op=LOAD Dec 13 04:11:25.503000 audit: BPF prog-id=8 op=LOAD Dec 13 04:11:25.504364 systemd[1]: Starting systemd-udevd.service... Dec 13 04:11:25.518519 systemd-udevd[385]: Using default interface naming scheme 'v252'. Dec 13 04:11:25.523889 systemd[1]: Started systemd-udevd.service. Dec 13 04:11:25.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.531356 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 04:11:25.555610 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Dec 13 04:11:25.600440 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 04:11:25.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:25.603285 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 04:11:25.645521 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 04:11:25.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:25.710552 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Dec 13 04:11:25.822600 kernel: libata version 3.00 loaded.
Dec 13 04:11:25.822648 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 04:11:25.822923 kernel: scsi host0: ata_piix
Dec 13 04:11:25.823180 kernel: scsi host1: ata_piix
Dec 13 04:11:25.823480 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Dec 13 04:11:25.823511 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Dec 13 04:11:25.823537 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 04:11:25.823570 kernel: GPT:17805311 != 41943039
Dec 13 04:11:25.823594 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 04:11:25.823618 kernel: GPT:17805311 != 41943039
Dec 13 04:11:25.823641 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 04:11:25.823665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:11:25.941384 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (443)
Dec 13 04:11:25.959503 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 04:11:25.967706 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 04:11:25.969040 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 04:11:25.982154 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 04:11:25.990942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 04:11:25.992498 systemd[1]: Starting disk-uuid.service...
Dec 13 04:11:26.016322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:11:26.016488 disk-uuid[461]: Primary Header is updated.
Dec 13 04:11:26.016488 disk-uuid[461]: Secondary Entries is updated.
Dec 13 04:11:26.016488 disk-uuid[461]: Secondary Header is updated.
Dec 13 04:11:27.043373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 04:11:27.044121 disk-uuid[462]: The operation has completed successfully.
Dec 13 04:11:27.110665 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 04:11:27.111690 systemd[1]: Finished disk-uuid.service.
Dec 13 04:11:27.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.139799 systemd[1]: Starting verity-setup.service...
Dec 13 04:11:27.179377 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Dec 13 04:11:27.285842 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 04:11:27.288814 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 04:11:27.290491 systemd[1]: Finished verity-setup.service.
Dec 13 04:11:27.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.426347 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 04:11:27.426413 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 04:11:27.426998 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 04:11:27.427732 systemd[1]: Starting ignition-setup.service...
Dec 13 04:11:27.428955 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 04:11:27.442948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:11:27.443012 kernel: BTRFS info (device vda6): using free space tree
Dec 13 04:11:27.443040 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 04:11:27.466942 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 04:11:27.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.481840 systemd[1]: Finished ignition-setup.service.
Dec 13 04:11:27.484730 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 04:11:27.588047 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 04:11:27.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.591000 audit: BPF prog-id=9 op=LOAD
Dec 13 04:11:27.593458 systemd[1]: Starting systemd-networkd.service...
Dec 13 04:11:27.618102 systemd-networkd[632]: lo: Link UP
Dec 13 04:11:27.618113 systemd-networkd[632]: lo: Gained carrier
Dec 13 04:11:27.618774 systemd-networkd[632]: Enumeration completed
Dec 13 04:11:27.619153 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 04:11:27.620632 systemd-networkd[632]: eth0: Link UP
Dec 13 04:11:27.620637 systemd-networkd[632]: eth0: Gained carrier
Dec 13 04:11:27.621983 systemd[1]: Started systemd-networkd.service.
Dec 13 04:11:27.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.623239 systemd[1]: Reached target network.target.
Dec 13 04:11:27.625261 systemd[1]: Starting iscsiuio.service...
Dec 13 04:11:27.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.633696 systemd[1]: Started iscsiuio.service.
Dec 13 04:11:27.634911 systemd[1]: Starting iscsid.service...
Dec 13 04:11:27.636510 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.93/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 04:11:27.638872 iscsid[641]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 04:11:27.638872 iscsid[641]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 04:11:27.638872 iscsid[641]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 04:11:27.638872 iscsid[641]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 04:11:27.638872 iscsid[641]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 04:11:27.638872 iscsid[641]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 04:11:27.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.641736 systemd[1]: Started iscsid.service.
Dec 13 04:11:27.643612 systemd[1]: Starting dracut-initqueue.service...
Dec 13 04:11:27.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.658063 systemd[1]: Finished dracut-initqueue.service.
Dec 13 04:11:27.658598 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 04:11:27.659001 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 04:11:27.659430 systemd[1]: Reached target remote-fs.target.
Dec 13 04:11:27.660674 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 04:11:27.670050 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 04:11:27.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.792885 ignition[556]: Ignition 2.14.0
Dec 13 04:11:27.793536 ignition[556]: Stage: fetch-offline
Dec 13 04:11:27.793646 ignition[556]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:11:27.793673 ignition[556]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 04:11:27.796400 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 04:11:27.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:27.794762 ignition[556]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:11:27.799122 systemd[1]: Starting ignition-fetch.service...
Dec 13 04:11:27.794892 ignition[556]: parsed url from cmdline: ""
Dec 13 04:11:27.794896 ignition[556]: no config URL provided
Dec 13 04:11:27.794902 ignition[556]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 04:11:27.794910 ignition[556]: no config at "/usr/lib/ignition/user.ign"
Dec 13 04:11:27.794919 ignition[556]: failed to fetch config: resource requires networking
Dec 13 04:11:27.795034 ignition[556]: Ignition finished successfully
Dec 13 04:11:27.807843 ignition[655]: Ignition 2.14.0
Dec 13 04:11:27.807851 ignition[655]: Stage: fetch
Dec 13 04:11:27.807961 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:11:27.807983 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 04:11:27.809000 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:11:27.809102 ignition[655]: parsed url from cmdline: ""
Dec 13 04:11:27.809106 ignition[655]: no config URL provided
Dec 13 04:11:27.809112 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 04:11:27.809120 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Dec 13 04:11:27.813872 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 04:11:27.813900 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 04:11:27.817681 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 04:11:28.260392 ignition[655]: GET result: OK
Dec 13 04:11:28.260606 ignition[655]: parsing config with SHA512: afbbd3ef4402d5f4d2f59a0307a68907508b1fb534c9654415336b459acbd02f0741f5cccfffb41b671633c02c0835671797a094f8c2df748a217c96264d2027
Dec 13 04:11:28.277927 unknown[655]: fetched base config from "system"
Dec 13 04:11:28.277955 unknown[655]: fetched base config from "system"
Dec 13 04:11:28.278713 ignition[655]: fetch: fetch complete
Dec 13 04:11:28.277971 unknown[655]: fetched user config from "openstack"
Dec 13 04:11:28.278726 ignition[655]: fetch: fetch passed
Dec 13 04:11:28.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.282215 systemd[1]: Finished ignition-fetch.service.
Dec 13 04:11:28.278831 ignition[655]: Ignition finished successfully
Dec 13 04:11:28.286002 systemd[1]: Starting ignition-kargs.service...
Dec 13 04:11:28.306972 ignition[661]: Ignition 2.14.0
Dec 13 04:11:28.307000 ignition[661]: Stage: kargs
Dec 13 04:11:28.307279 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:11:28.307364 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 04:11:28.309657 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:11:28.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.321940 systemd[1]: Finished ignition-kargs.service.
Dec 13 04:11:28.311769 ignition[661]: kargs: kargs passed
Dec 13 04:11:28.324799 systemd[1]: Starting ignition-disks.service...
Dec 13 04:11:28.311867 ignition[661]: Ignition finished successfully
Dec 13 04:11:28.343434 ignition[667]: Ignition 2.14.0
Dec 13 04:11:28.343461 ignition[667]: Stage: disks
Dec 13 04:11:28.343716 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:11:28.343759 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 04:11:28.346003 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:11:28.348135 ignition[667]: disks: disks passed
Dec 13 04:11:28.350433 systemd[1]: Finished ignition-disks.service.
Dec 13 04:11:28.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.348260 ignition[667]: Ignition finished successfully
Dec 13 04:11:28.352703 systemd[1]: Reached target initrd-root-device.target.
Dec 13 04:11:28.354761 systemd[1]: Reached target local-fs-pre.target.
Dec 13 04:11:28.356983 systemd[1]: Reached target local-fs.target.
Dec 13 04:11:28.359194 systemd[1]: Reached target sysinit.target.
Dec 13 04:11:28.361431 systemd[1]: Reached target basic.target.
Dec 13 04:11:28.365397 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 04:11:28.397611 systemd-fsck[675]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 04:11:28.409213 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 04:11:28.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.412468 systemd[1]: Mounting sysroot.mount...
Dec 13 04:11:28.437375 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 04:11:28.438688 systemd[1]: Mounted sysroot.mount.
Dec 13 04:11:28.441389 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 04:11:28.444978 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 04:11:28.446946 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 04:11:28.448516 systemd[1]: Starting flatcar-openstack-hostname.service...
Dec 13 04:11:28.454137 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 04:11:28.454203 systemd[1]: Reached target ignition-diskful.target.
Dec 13 04:11:28.462078 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 04:11:28.470545 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 04:11:28.478821 systemd[1]: Starting initrd-setup-root.service...
Dec 13 04:11:28.502356 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682)
Dec 13 04:11:28.512780 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:11:28.512890 kernel: BTRFS info (device vda6): using free space tree
Dec 13 04:11:28.512931 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 04:11:28.512971 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 04:11:28.523539 initrd-setup-root[711]: cut: /sysroot/etc/group: No such file or directory
Dec 13 04:11:28.534539 initrd-setup-root[721]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 04:11:28.536680 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 04:11:28.547955 initrd-setup-root[729]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 04:11:28.652972 systemd[1]: Finished initrd-setup-root.service.
Dec 13 04:11:28.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.655923 systemd[1]: Starting ignition-mount.service...
Dec 13 04:11:28.658574 systemd[1]: Starting sysroot-boot.service...
Dec 13 04:11:28.679344 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 04:11:28.681161 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 04:11:28.708791 ignition[750]: INFO : Ignition 2.14.0
Dec 13 04:11:28.709690 ignition[750]: INFO : Stage: mount
Dec 13 04:11:28.710374 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:11:28.711071 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 04:11:28.713149 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:11:28.714765 ignition[750]: INFO : mount: mount passed
Dec 13 04:11:28.717408 ignition[750]: INFO : Ignition finished successfully
Dec 13 04:11:28.719036 systemd[1]: Finished ignition-mount.service.
Dec 13 04:11:28.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.725568 systemd[1]: Finished sysroot-boot.service.
Dec 13 04:11:28.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.729926 coreos-metadata[681]: Dec 13 04:11:28.729 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 04:11:28.745841 coreos-metadata[681]: Dec 13 04:11:28.745 INFO Fetch successful
Dec 13 04:11:28.746454 coreos-metadata[681]: Dec 13 04:11:28.746 INFO wrote hostname ci-3510-3-6-e-d6f0f5ff51.novalocal to /sysroot/etc/hostname
Dec 13 04:11:28.750406 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 04:11:28.750533 systemd[1]: Finished flatcar-openstack-hostname.service.
Dec 13 04:11:28.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:28.752746 systemd[1]: Starting ignition-files.service...
Dec 13 04:11:28.759930 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 04:11:28.769333 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758)
Dec 13 04:11:28.772327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 04:11:28.772367 kernel: BTRFS info (device vda6): using free space tree
Dec 13 04:11:28.772381 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 04:11:28.782776 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 04:11:28.800227 ignition[777]: INFO : Ignition 2.14.0
Dec 13 04:11:28.801821 ignition[777]: INFO : Stage: files
Dec 13 04:11:28.803117 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:11:28.804742 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 04:11:28.807003 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:11:28.808693 ignition[777]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 04:11:28.808693 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 04:11:28.808693 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 04:11:28.813580 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 04:11:28.815152 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 04:11:28.815152 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 04:11:28.814685 unknown[777]: wrote ssh authorized keys file for user: core
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:11:28.819878 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 04:11:29.229483 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 04:11:29.309656 systemd-networkd[632]: eth0: Gained IPv6LL
Dec 13 04:11:30.837344 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 04:11:30.838804 ignition[777]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 04:11:30.839574 ignition[777]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 04:11:30.840328 ignition[777]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 04:11:30.842686 ignition[777]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 04:11:30.858545 ignition[777]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 04:11:30.859476 ignition[777]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 04:11:30.860288 ignition[777]: INFO : files: files passed
Dec 13 04:11:30.860989 ignition[777]: INFO : Ignition finished successfully
Dec 13 04:11:30.864481 systemd[1]: Finished ignition-files.service.
Dec 13 04:11:30.873095 kernel: kauditd_printk_skb: 27 callbacks suppressed
Dec 13 04:11:30.873120 kernel: audit: type=1130 audit(1734063090.868:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.873361 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 04:11:30.874727 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 04:11:30.879808 systemd[1]: Starting ignition-quench.service...
Dec 13 04:11:30.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.886484 initrd-setup-root-after-ignition[801]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 04:11:30.894798 kernel: audit: type=1130 audit(1734063090.885:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.894846 kernel: audit: type=1131 audit(1734063090.885:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.885583 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 04:11:30.885694 systemd[1]: Finished ignition-quench.service.
Dec 13 04:11:30.887763 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 04:11:30.908120 kernel: audit: type=1130 audit(1734063090.897:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.898269 systemd[1]: Reached target ignition-complete.target.
Dec 13 04:11:30.909412 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 04:11:30.936500 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 04:11:30.936598 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 04:11:30.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:30.939682 systemd[1]: Reached target initrd-fs.target.
Dec 13 04:11:30.946524 kernel: audit: type=1130 audit(1734063090.937:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 04:11:30.946572 kernel: audit: type=1131 audit(1734063090.939:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:30.945309 systemd[1]: Reached target initrd.target. Dec 13 04:11:30.946919 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 04:11:30.947726 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 04:11:30.962601 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 04:11:30.972992 kernel: audit: type=1130 audit(1734063090.962:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:30.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:30.963918 systemd[1]: Starting initrd-cleanup.service... Dec 13 04:11:30.981718 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 04:11:30.981817 systemd[1]: Finished initrd-cleanup.service. Dec 13 04:11:31.000413 kernel: audit: type=1130 audit(1734063090.983:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:31.000456 kernel: audit: type=1131 audit(1734063090.983:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:30.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:11:30.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:30.984244 systemd[1]: Stopped target nss-lookup.target. Dec 13 04:11:31.000801 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 04:11:31.002348 systemd[1]: Stopped target timers.target. Dec 13 04:11:31.003912 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 04:11:31.013358 kernel: audit: type=1131 audit(1734063091.005:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:31.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:31.003958 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 04:11:31.005459 systemd[1]: Stopped target initrd.target. Dec 13 04:11:31.013732 systemd[1]: Stopped target basic.target. Dec 13 04:11:31.014664 systemd[1]: Stopped target ignition-complete.target. Dec 13 04:11:31.015561 systemd[1]: Stopped target ignition-diskful.target. Dec 13 04:11:31.016458 systemd[1]: Stopped target initrd-root-device.target. Dec 13 04:11:31.017368 systemd[1]: Stopped target remote-fs.target. Dec 13 04:11:31.018199 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 04:11:31.019062 systemd[1]: Stopped target sysinit.target. Dec 13 04:11:31.019891 systemd[1]: Stopped target local-fs.target. Dec 13 04:11:31.020765 systemd[1]: Stopped target local-fs-pre.target. Dec 13 04:11:31.021592 systemd[1]: Stopped target swap.target. 
Dec 13 04:11:31.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.022399 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 04:11:31.022440 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 04:11:31.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.023258 systemd[1]: Stopped target cryptsetup.target.
Dec 13 04:11:31.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.024035 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 04:11:31.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.024073 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 04:11:31.025020 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 04:11:31.025057 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 04:11:31.029319 iscsid[641]: iscsid shutting down.
Dec 13 04:11:31.025902 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 04:11:31.025939 systemd[1]: Stopped ignition-files.service.
Dec 13 04:11:31.027371 systemd[1]: Stopping ignition-mount.service...
Dec 13 04:11:31.032117 systemd[1]: Stopping iscsid.service...
Dec 13 04:11:31.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.033161 systemd[1]: Stopping sysroot-boot.service...
Dec 13 04:11:31.033633 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 04:11:31.033686 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 04:11:31.034171 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 04:11:31.034214 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 04:11:31.037130 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 04:11:31.038346 systemd[1]: Stopped iscsid.service.
Dec 13 04:11:31.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.045746 systemd[1]: Stopping iscsiuio.service...
Dec 13 04:11:31.048605 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 04:11:31.048710 systemd[1]: Stopped iscsiuio.service.
Dec 13 04:11:31.053944 ignition[815]: INFO : Ignition 2.14.0
Dec 13 04:11:31.053944 ignition[815]: INFO : Stage: umount
Dec 13 04:11:31.053944 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 04:11:31.053944 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 04:11:31.053944 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 04:11:31.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.061580 ignition[815]: INFO : umount: umount passed
Dec 13 04:11:31.061580 ignition[815]: INFO : Ignition finished successfully
Dec 13 04:11:31.055518 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 04:11:31.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.055618 systemd[1]: Stopped ignition-mount.service.
Dec 13 04:11:31.059865 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 04:11:31.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.060163 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 04:11:31.060217 systemd[1]: Stopped ignition-disks.service.
Dec 13 04:11:31.060712 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 04:11:31.060750 systemd[1]: Stopped ignition-kargs.service.
Dec 13 04:11:31.063569 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 04:11:31.063606 systemd[1]: Stopped ignition-fetch.service.
Dec 13 04:11:31.064082 systemd[1]: Stopped target network.target.
Dec 13 04:11:31.065041 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 04:11:31.065080 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 04:11:31.066066 systemd[1]: Stopped target paths.target.
Dec 13 04:11:31.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.066647 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 04:11:31.071362 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 04:11:31.072010 systemd[1]: Stopped target slices.target.
Dec 13 04:11:31.072439 systemd[1]: Stopped target sockets.target.
Dec 13 04:11:31.072840 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 04:11:31.072870 systemd[1]: Closed iscsid.socket.
Dec 13 04:11:31.073257 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 04:11:31.073286 systemd[1]: Closed iscsiuio.socket.
Dec 13 04:11:31.073698 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 04:11:31.073732 systemd[1]: Stopped ignition-setup.service.
Dec 13 04:11:31.074252 systemd[1]: Stopping systemd-networkd.service...
Dec 13 04:11:31.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.074858 systemd[1]: Stopping systemd-resolved.service...
Dec 13 04:11:31.082357 systemd-networkd[632]: eth0: DHCPv6 lease lost
Dec 13 04:11:31.083436 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 04:11:31.083528 systemd[1]: Stopped systemd-networkd.service.
Dec 13 04:11:31.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.086678 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 04:11:31.088000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 04:11:31.086769 systemd[1]: Stopped systemd-resolved.service.
Dec 13 04:11:31.088482 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 04:11:31.088519 systemd[1]: Closed systemd-networkd.socket.
Dec 13 04:11:31.090426 systemd[1]: Stopping network-cleanup.service...
Dec 13 04:11:31.091000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 04:11:31.091275 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 04:11:31.091417 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 04:11:31.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.093506 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 04:11:31.093549 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 04:11:31.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.094715 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 04:11:31.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.094753 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 04:11:31.099131 systemd[1]: Stopping systemd-udevd.service...
Dec 13 04:11:31.101037 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 04:11:31.103615 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 04:11:31.104792 systemd[1]: Stopped network-cleanup.service.
Dec 13 04:11:31.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.105497 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 04:11:31.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.105614 systemd[1]: Stopped systemd-udevd.service.
Dec 13 04:11:31.107008 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 04:11:31.107050 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 04:11:31.107895 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 04:11:31.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.107924 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 04:11:31.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.109027 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 04:11:31.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.109070 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 04:11:31.109968 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 04:11:31.110004 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 04:11:31.110987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 04:11:31.111023 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 04:11:31.112623 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 04:11:31.119192 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 04:11:31.119915 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 04:11:31.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.121050 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 04:11:31.121639 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 04:11:31.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.122634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 04:11:31.123348 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 04:11:31.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.125171 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 04:11:31.125644 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 04:11:31.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.125726 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 04:11:31.472283 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 04:11:31.472562 systemd[1]: Stopped sysroot-boot.service.
Dec 13 04:11:31.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.475331 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 04:11:31.477461 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 04:11:31.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:31.477563 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 04:11:31.481361 systemd[1]: Starting initrd-switch-root.service...
Dec 13 04:11:31.527172 systemd[1]: Switching root.
Dec 13 04:11:31.557076 systemd-journald[185]: Journal stopped
Dec 13 04:11:35.798465 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 04:11:35.798548 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 04:11:35.798567 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 04:11:35.798601 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 04:11:35.798619 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 04:11:35.798632 kernel: SELinux: policy capability open_perms=1
Dec 13 04:11:35.798644 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 04:11:35.798656 kernel: SELinux: policy capability always_check_network=0
Dec 13 04:11:35.798672 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 04:11:35.798685 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 04:11:35.798697 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 04:11:35.798712 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 04:11:35.798726 systemd[1]: Successfully loaded SELinux policy in 88.102ms.
Dec 13 04:11:35.798747 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.524ms.
Dec 13 04:11:35.798764 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 04:11:35.798798 systemd[1]: Detected virtualization kvm.
Dec 13 04:11:35.798813 systemd[1]: Detected architecture x86-64.
Dec 13 04:11:35.798826 systemd[1]: Detected first boot.
Dec 13 04:11:35.798839 systemd[1]: Hostname set to .
Dec 13 04:11:35.798855 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 04:11:35.798869 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 04:11:35.798882 systemd[1]: Populated /etc with preset unit settings.
Dec 13 04:11:35.798897 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 04:11:35.798911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 04:11:35.798926 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:11:35.798940 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 04:11:35.798955 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 04:11:35.798969 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 04:11:35.798984 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 04:11:35.798997 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 04:11:35.799010 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 04:11:35.799024 systemd[1]: Created slice system-getty.slice.
Dec 13 04:11:35.799041 systemd[1]: Created slice system-modprobe.slice.
Dec 13 04:11:35.799054 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 04:11:35.799067 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 04:11:35.799080 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 04:11:35.799094 systemd[1]: Created slice user.slice.
Dec 13 04:11:35.799110 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 04:11:35.799123 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 04:11:35.799136 systemd[1]: Set up automount boot.automount.
Dec 13 04:11:35.799151 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 04:11:35.799165 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 04:11:35.799178 systemd[1]: Stopped target initrd-fs.target.
Dec 13 04:11:35.799191 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 04:11:35.799203 systemd[1]: Reached target integritysetup.target.
Dec 13 04:11:35.799217 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 04:11:35.799229 systemd[1]: Reached target remote-fs.target.
Dec 13 04:11:35.799244 systemd[1]: Reached target slices.target.
Dec 13 04:11:35.799258 systemd[1]: Reached target swap.target.
Dec 13 04:11:35.799272 systemd[1]: Reached target torcx.target.
Dec 13 04:11:35.799284 systemd[1]: Reached target veritysetup.target.
Dec 13 04:11:35.799374 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 04:11:35.799395 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 04:11:35.799410 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 04:11:35.799423 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 04:11:35.799437 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 04:11:35.799450 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 04:11:35.799466 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 04:11:35.799479 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 04:11:35.799492 systemd[1]: Mounting media.mount...
Dec 13 04:11:35.799506 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:11:35.799519 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 04:11:35.799531 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 04:11:35.799544 systemd[1]: Mounting tmp.mount...
Dec 13 04:11:35.799557 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 04:11:35.799571 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 04:11:35.799586 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 04:11:35.799599 systemd[1]: Starting modprobe@configfs.service...
Dec 13 04:11:35.799613 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 04:11:35.799626 systemd[1]: Starting modprobe@drm.service...
Dec 13 04:11:35.799638 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 04:11:35.799651 systemd[1]: Starting modprobe@fuse.service...
Dec 13 04:11:35.799665 systemd[1]: Starting modprobe@loop.service...
Dec 13 04:11:35.799678 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 04:11:35.799694 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 04:11:35.799707 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 04:11:35.799719 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 04:11:35.799733 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 04:11:35.799746 systemd[1]: Stopped systemd-journald.service.
Dec 13 04:11:35.799759 systemd[1]: Starting systemd-journald.service...
Dec 13 04:11:35.799773 systemd[1]: Starting systemd-modules-load.service...
Dec 13 04:11:35.799787 systemd[1]: Starting systemd-network-generator.service...
Dec 13 04:11:35.799800 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 04:11:35.799813 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 04:11:35.799828 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 04:11:35.799842 systemd[1]: Stopped verity-setup.service.
Dec 13 04:11:35.799856 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:11:35.799869 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 04:11:35.799882 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 04:11:35.799895 systemd[1]: Mounted media.mount.
Dec 13 04:11:35.799907 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 04:11:35.799920 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 04:11:35.799933 systemd[1]: Mounted tmp.mount.
Dec 13 04:11:35.800358 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 04:11:35.800380 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 04:11:35.800393 systemd[1]: Finished modprobe@configfs.service.
Dec 13 04:11:35.800406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:11:35.800420 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 04:11:35.800443 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 04:11:35.800457 systemd[1]: Finished modprobe@drm.service.
Dec 13 04:11:35.800471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:11:35.800486 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 04:11:35.800499 systemd[1]: Finished systemd-modules-load.service.
Dec 13 04:11:35.800514 systemd[1]: Finished systemd-network-generator.service.
Dec 13 04:11:35.800531 systemd-journald[913]: Journal started
Dec 13 04:11:35.800585 systemd-journald[913]: Runtime Journal (/run/log/journal/4edb8460fa194c8fb3eeacb5f5a58ac2) is 4.9M, max 39.5M, 34.5M free.
Dec 13 04:11:32.057000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 04:11:32.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 04:11:32.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 04:11:32.200000 audit: BPF prog-id=10 op=LOAD
Dec 13 04:11:32.200000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 04:11:32.200000 audit: BPF prog-id=11 op=LOAD
Dec 13 04:11:32.200000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 04:11:32.352000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 04:11:32.352000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 04:11:32.352000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 04:11:32.354000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 04:11:32.354000 audit[847]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 04:11:32.354000 audit: CWD cwd="/"
Dec 13 04:11:32.354000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:11:35.803322 systemd[1]: Started systemd-journald.service.
Dec 13 04:11:32.354000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:11:32.354000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 04:11:35.606000 audit: BPF prog-id=12 op=LOAD
Dec 13 04:11:35.606000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 04:11:35.606000 audit: BPF prog-id=13 op=LOAD
Dec 13 04:11:35.606000 audit: BPF prog-id=14 op=LOAD
Dec 13 04:11:35.606000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 04:11:35.606000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 04:11:35.607000 audit: BPF prog-id=15 op=LOAD
Dec 13 04:11:35.607000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 04:11:35.607000 audit: BPF prog-id=16 op=LOAD
Dec 13 04:11:35.607000 audit: BPF prog-id=17 op=LOAD
Dec 13 04:11:35.607000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 04:11:35.607000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 04:11:35.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.621000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 04:11:35.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.733000 audit: BPF prog-id=18 op=LOAD
Dec 13 04:11:35.733000 audit: BPF prog-id=19 op=LOAD
Dec 13 04:11:35.733000 audit: BPF prog-id=20 op=LOAD
Dec 13 04:11:35.733000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 04:11:35.733000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 04:11:35.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:35.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 04:11:35.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:11:35.796000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 04:11:35.796000 audit[913]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff16eebe70 a2=4000 a3=7fff16eebf0c items=0 ppid=1 pid=913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:11:35.796000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 04:11:35.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:11:32.348396 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:11:35.604940 systemd[1]: Queued start job for default target multi-user.target. Dec 13 04:11:32.349269 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:11:35.604951 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 04:11:32.349307 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:11:35.608812 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 04:11:32.349348 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 04:11:35.803992 systemd[1]: Finished systemd-remount-fs.service. Dec 13 04:11:32.349360 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 04:11:35.804717 systemd[1]: Reached target network-pre.target. Dec 13 04:11:32.349393 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 04:11:35.807378 systemd[1]: Mounting sys-kernel-config.mount... 
Dec 13 04:11:32.349409 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 04:11:35.807822 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 04:11:32.349646 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 04:11:35.811797 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 04:11:32.349686 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:11:32.349701 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:11:32.351335 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 04:11:32.351375 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 04:11:35.828675 systemd-journald[913]: Time spent on flushing to /var/log/journal/4edb8460fa194c8fb3eeacb5f5a58ac2 is 20.203ms for 1051 entries. Dec 13 04:11:35.828675 systemd-journald[913]: System Journal (/var/log/journal/4edb8460fa194c8fb3eeacb5f5a58ac2) is 8.0M, max 584.8M, 576.8M free. Dec 13 04:11:35.864868 systemd-journald[913]: Received client request to flush runtime journal. 
Dec 13 04:11:35.864920 kernel: fuse: init (API version 7.34) Dec 13 04:11:35.864942 kernel: loop: module loaded Dec 13 04:11:35.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:32.351396 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 04:11:35.813130 systemd[1]: Starting systemd-journal-flush.service... Dec 13 04:11:32.351412 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 04:11:35.813626 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 04:11:32.351433 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 04:11:35.814609 systemd[1]: Starting systemd-random-seed.service... Dec 13 04:11:32.351450 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 04:11:35.815991 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:11:35.196230 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:11:35.819792 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 04:11:35.196551 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:11:35.887405 kernel: kauditd_printk_skb: 95 callbacks suppressed Dec 13 04:11:35.887479 kernel: audit: type=1130 audit(1734063095.869:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.887500 kernel: audit: type=1131 audit(1734063095.869:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:11:35.887606 kernel: audit: type=1130 audit(1734063095.874:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.834863 systemd[1]: Finished systemd-random-seed.service. Dec 13 04:11:35.196662 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:11:35.835561 systemd[1]: Reached target first-boot-complete.target. Dec 13 04:11:35.196865 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:11:35.853248 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 04:11:35.196928 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 04:11:35.859787 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 04:11:35.197001 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-12-13T04:11:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 04:11:35.859916 systemd[1]: Finished modprobe@fuse.service. Dec 13 04:11:35.862414 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 04:11:35.866404 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 04:11:35.867405 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:11:35.869058 systemd[1]: Finished modprobe@loop.service. Dec 13 04:11:35.869890 systemd[1]: Finished systemd-journal-flush.service. Dec 13 04:11:35.875250 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:11:35.904641 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 04:11:35.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.910556 kernel: audit: type=1130 audit(1734063095.904:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.909444 systemd[1]: Starting systemd-sysusers.service... Dec 13 04:11:35.915998 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 04:11:35.921084 kernel: audit: type=1130 audit(1734063095.916:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.917562 systemd[1]: Starting systemd-udev-settle.service... Dec 13 04:11:35.929544 udevadm[958]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 04:11:35.951854 systemd[1]: Finished systemd-sysusers.service. Dec 13 04:11:35.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.953362 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 04:11:35.956320 kernel: audit: type=1130 audit(1734063095.952:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.995564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 04:11:36.000363 kernel: audit: type=1130 audit(1734063095.995:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:35.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 04:11:36.629497 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 04:11:36.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:36.643075 kernel: audit: type=1130 audit(1734063096.630:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:36.643172 kernel: audit: type=1334 audit(1734063096.639:142): prog-id=21 op=LOAD Dec 13 04:11:36.639000 audit: BPF prog-id=21 op=LOAD Dec 13 04:11:36.641352 systemd[1]: Starting systemd-udevd.service... Dec 13 04:11:36.639000 audit: BPF prog-id=22 op=LOAD Dec 13 04:11:36.639000 audit: BPF prog-id=7 op=UNLOAD Dec 13 04:11:36.639000 audit: BPF prog-id=8 op=UNLOAD Dec 13 04:11:36.647389 kernel: audit: type=1334 audit(1734063096.639:143): prog-id=22 op=LOAD Dec 13 04:11:36.683694 systemd-udevd[961]: Using default interface naming scheme 'v252'. Dec 13 04:11:36.741465 systemd[1]: Started systemd-udevd.service. Dec 13 04:11:36.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:36.747000 audit: BPF prog-id=23 op=LOAD Dec 13 04:11:36.751596 systemd[1]: Starting systemd-networkd.service... Dec 13 04:11:36.776000 audit: BPF prog-id=24 op=LOAD Dec 13 04:11:36.777000 audit: BPF prog-id=25 op=LOAD Dec 13 04:11:36.778000 audit: BPF prog-id=26 op=LOAD Dec 13 04:11:36.780393 systemd[1]: Starting systemd-userdbd.service... Dec 13 04:11:36.831866 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 04:11:36.850690 systemd[1]: Started systemd-userdbd.service. 
Dec 13 04:11:36.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:36.877841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 04:11:36.934487 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 04:11:36.943000 audit[976]: AVC avc: denied { confidentiality } for pid=976 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 04:11:36.943000 audit[976]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55931729ea40 a1=337fc a2=7f1d26498bc5 a3=5 items=110 ppid=961 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:11:36.943000 audit: CWD cwd="/" Dec 13 04:11:36.943000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=1 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=2 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=3 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=4 name=(null) inode=13894 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=5 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=6 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=7 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=8 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=9 name=(null) inode=13898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=10 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=11 name=(null) inode=13899 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=12 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=13 name=(null) inode=13900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=14 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=15 name=(null) inode=13901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=16 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=17 name=(null) inode=13902 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=18 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=19 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=20 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=21 name=(null) inode=13904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=22 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=23 name=(null) inode=13905 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=24 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=25 name=(null) inode=13906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=26 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=27 name=(null) inode=13907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=28 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=29 name=(null) inode=13908 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=30 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=31 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=32 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=33 name=(null) inode=13910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=34 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=35 name=(null) inode=13911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=36 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=37 name=(null) inode=13912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=38 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=39 name=(null) inode=13913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=40 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
04:11:36.943000 audit: PATH item=41 name=(null) inode=13914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=42 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=43 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=44 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=45 name=(null) inode=13916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=46 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=47 name=(null) inode=13917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=48 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=49 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=50 
name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=51 name=(null) inode=13919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=52 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=53 name=(null) inode=13920 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=55 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=56 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=57 name=(null) inode=13922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=58 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=59 name=(null) inode=13923 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=60 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=61 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=62 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=63 name=(null) inode=13925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=64 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=65 name=(null) inode=13926 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=66 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=67 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=68 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=69 name=(null) inode=13928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=70 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=71 name=(null) inode=13929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=72 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=73 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=74 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=75 name=(null) inode=13931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=76 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=77 name=(null) inode=13932 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=78 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=79 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=80 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.951945 systemd-networkd[971]: lo: Link UP Dec 13 04:11:36.952466 systemd-networkd[971]: lo: Gained carrier Dec 13 04:11:36.952880 systemd-networkd[971]: Enumeration completed Dec 13 04:11:36.952995 systemd[1]: Started systemd-networkd.service. Dec 13 04:11:36.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:36.954438 systemd-networkd[971]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 04:11:36.943000 audit: PATH item=81 name=(null) inode=13934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=82 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=83 name=(null) inode=13935 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=84 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=85 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=86 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=87 name=(null) inode=13937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=88 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=89 name=(null) inode=13938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=90 
name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=91 name=(null) inode=13939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=92 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=93 name=(null) inode=13940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=94 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=95 name=(null) inode=13941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=96 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=97 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=98 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=99 name=(null) inode=13943 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=100 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=101 name=(null) inode=13944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=102 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=103 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=104 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=105 name=(null) inode=13946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=106 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=107 name=(null) inode=13947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PATH item=109 name=(null) inode=13948 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:11:36.943000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 04:11:36.956200 systemd-networkd[971]: eth0: Link UP Dec 13 04:11:36.956210 systemd-networkd[971]: eth0: Gained carrier Dec 13 04:11:36.961391 kernel: ACPI: button: Power Button [PWRF] Dec 13 04:11:36.968348 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 04:11:36.974483 systemd-networkd[971]: eth0: DHCPv4 address 172.24.4.93/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 04:11:36.977338 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 04:11:36.981318 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 04:11:37.027800 systemd[1]: Finished systemd-udev-settle.service. Dec 13 04:11:37.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.029584 systemd[1]: Starting lvm2-activation-early.service... Dec 13 04:11:37.063387 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:11:37.105010 systemd[1]: Finished lvm2-activation-early.service. Dec 13 04:11:37.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.106453 systemd[1]: Reached target cryptsetup.target. Dec 13 04:11:37.110064 systemd[1]: Starting lvm2-activation.service... 
Dec 13 04:11:37.118822 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:11:37.157586 systemd[1]: Finished lvm2-activation.service. Dec 13 04:11:37.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.158919 systemd[1]: Reached target local-fs-pre.target. Dec 13 04:11:37.160071 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 04:11:37.160138 systemd[1]: Reached target local-fs.target. Dec 13 04:11:37.161262 systemd[1]: Reached target machines.target. Dec 13 04:11:37.165018 systemd[1]: Starting ldconfig.service... Dec 13 04:11:37.167808 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.167922 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:11:37.170232 systemd[1]: Starting systemd-boot-update.service... Dec 13 04:11:37.174635 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 04:11:37.181046 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 04:11:37.186412 systemd[1]: Starting systemd-sysext.service... Dec 13 04:11:37.207759 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl) Dec 13 04:11:37.211465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 04:11:37.231184 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 04:11:37.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 04:11:37.241424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 04:11:37.245099 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 04:11:37.245287 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 04:11:37.264318 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 04:11:37.291209 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 04:11:37.292993 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 04:11:37.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.326788 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 04:11:37.351359 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 04:11:37.387698 (sd-sysext)[1006]: Using extensions 'kubernetes'. Dec 13 04:11:37.390088 (sd-sysext)[1006]: Merged extensions into '/usr'. Dec 13 04:11:37.411674 systemd-fsck[1003]: fsck.fat 4.2 (2021-01-31) Dec 13 04:11:37.411674 systemd-fsck[1003]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 04:11:37.420958 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 04:11:37.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.425380 systemd[1]: Mounting boot.mount... Dec 13 04:11:37.451626 systemd[1]: Mounted boot.mount. Dec 13 04:11:37.455366 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:11:37.456847 systemd[1]: Mounting usr-share-oem.mount... 
Dec 13 04:11:37.457607 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.458801 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:11:37.460570 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:11:37.463774 systemd[1]: Starting modprobe@loop.service... Dec 13 04:11:37.464285 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.464440 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:11:37.464582 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:11:37.465993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:11:37.466124 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:11:37.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.468761 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:11:37.468882 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:11:37.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:11:37.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.469800 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:11:37.474239 systemd[1]: Mounted usr-share-oem.mount. Dec 13 04:11:37.476267 systemd[1]: Finished systemd-sysext.service. Dec 13 04:11:37.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.477843 systemd[1]: Starting ensure-sysext.service... Dec 13 04:11:37.479582 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 04:11:37.488117 systemd[1]: Reloading. Dec 13 04:11:37.500918 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 04:11:37.513049 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 04:11:37.514945 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 04:11:37.569123 /usr/lib/systemd/system-generators/torcx-generator[1033]: time="2024-12-13T04:11:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:11:37.570360 /usr/lib/systemd/system-generators/torcx-generator[1033]: time="2024-12-13T04:11:37Z" level=info msg="torcx already run" Dec 13 04:11:37.669894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:11:37.670131 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:11:37.697093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 04:11:37.770000 audit: BPF prog-id=27 op=LOAD Dec 13 04:11:37.770000 audit: BPF prog-id=24 op=UNLOAD Dec 13 04:11:37.770000 audit: BPF prog-id=28 op=LOAD Dec 13 04:11:37.770000 audit: BPF prog-id=29 op=LOAD Dec 13 04:11:37.770000 audit: BPF prog-id=25 op=UNLOAD Dec 13 04:11:37.770000 audit: BPF prog-id=26 op=UNLOAD Dec 13 04:11:37.773000 audit: BPF prog-id=30 op=LOAD Dec 13 04:11:37.773000 audit: BPF prog-id=23 op=UNLOAD Dec 13 04:11:37.774000 audit: BPF prog-id=31 op=LOAD Dec 13 04:11:37.774000 audit: BPF prog-id=32 op=LOAD Dec 13 04:11:37.774000 audit: BPF prog-id=21 op=UNLOAD Dec 13 04:11:37.775000 audit: BPF prog-id=22 op=UNLOAD Dec 13 04:11:37.776000 audit: BPF prog-id=33 op=LOAD Dec 13 04:11:37.776000 audit: BPF prog-id=18 op=UNLOAD Dec 13 04:11:37.776000 audit: BPF prog-id=34 op=LOAD Dec 13 04:11:37.777000 audit: BPF prog-id=35 op=LOAD Dec 13 04:11:37.777000 audit: BPF prog-id=19 op=UNLOAD Dec 13 04:11:37.777000 audit: BPF prog-id=20 op=UNLOAD Dec 13 04:11:37.788100 systemd[1]: Finished systemd-boot-update.service. Dec 13 04:11:37.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.789200 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:11:37.789433 systemd[1]: Finished modprobe@loop.service. Dec 13 04:11:37.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.791442 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Dec 13 04:11:37.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.795453 systemd[1]: Starting audit-rules.service... Dec 13 04:11:37.797125 systemd[1]: Starting clean-ca-certificates.service... Dec 13 04:11:37.799061 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 04:11:37.804000 audit: BPF prog-id=36 op=LOAD Dec 13 04:11:37.807000 audit: BPF prog-id=37 op=LOAD Dec 13 04:11:37.803446 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.806094 systemd[1]: Starting systemd-resolved.service... Dec 13 04:11:37.809926 systemd[1]: Starting systemd-timesyncd.service... Dec 13 04:11:37.811507 systemd[1]: Starting systemd-update-utmp.service... Dec 13 04:11:37.813689 systemd[1]: Finished clean-ca-certificates.service. Dec 13 04:11:37.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.816530 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 04:11:37.821930 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:11:37.822145 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.823560 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:11:37.827858 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:11:37.829848 systemd[1]: Starting modprobe@loop.service... 
Dec 13 04:11:37.830402 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.830537 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:11:37.830673 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 04:11:37.830774 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:11:37.831863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:11:37.831999 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:11:37.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.835826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:11:37.835944 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:11:37.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:11:37.836794 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:11:37.836908 systemd[1]: Finished modprobe@loop.service. Dec 13 04:11:37.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:11:37.837817 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:11:37.837928 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.839775 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:11:37.839991 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.842315 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:11:37.844406 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:11:37.847451 systemd[1]: Starting modprobe@loop.service... Dec 13 04:11:37.847962 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:11:37.848110 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:11:37.848274 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 04:11:37.848409 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:11:37.849450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:11:37.849580 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 04:11:37.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.854990 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:11:37.855236 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 04:11:37.858143 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 04:11:37.858000 audit[1087]: SYSTEM_BOOT pid=1087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.862621 systemd[1]: Starting modprobe@drm.service...
Dec 13 04:11:37.863188 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 04:11:37.863377 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:11:37.864757 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 04:11:37.865289 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 04:11:37.865447 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:11:37.866551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:11:37.866681 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 04:11:37.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.872799 systemd[1]: Finished ensure-sysext.service.
Dec 13 04:11:37.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.873590 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 04:11:37.873703 systemd[1]: Finished modprobe@loop.service.
Dec 13 04:11:37.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.875026 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 04:11:37.877685 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 04:11:37.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.886856 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 04:11:37.886987 systemd[1]: Finished modprobe@drm.service.
Dec 13 04:11:37.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.891883 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 04:11:37.893064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:11:37.893190 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 04:11:37.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.893785 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 04:11:37.911517 systemd[1]: Finished ldconfig.service.
Dec 13 04:11:37.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.914272 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 04:11:37.916127 systemd[1]: Starting systemd-update-done.service...
Dec 13 04:11:37.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.923977 systemd[1]: Finished systemd-update-done.service.
Dec 13 04:11:37.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:11:37.935000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 04:11:37.935000 audit[1111]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcbbbd5150 a2=420 a3=0 items=0 ppid=1081 pid=1111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 04:11:37.935000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 04:11:37.936520 augenrules[1111]: No rules
Dec 13 04:11:37.937522 systemd[1]: Finished audit-rules.service.
Dec 13 04:11:37.945152 systemd[1]: Started systemd-timesyncd.service.
Dec 13 04:11:37.945732 systemd[1]: Reached target time-set.target.
Dec 13 04:11:37.956483 systemd-resolved[1085]: Positive Trust Anchors:
Dec 13 04:11:37.956502 systemd-resolved[1085]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 04:11:37.956539 systemd-resolved[1085]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 04:11:38.978605 systemd-timesyncd[1086]: Contacted time server 212.83.158.83:123 (0.flatcar.pool.ntp.org).
Dec 13 04:11:38.978996 systemd-timesyncd[1086]: Initial clock synchronization to Fri 2024-12-13 04:11:38.978524 UTC.
Dec 13 04:11:38.979345 systemd-resolved[1085]: Using system hostname 'ci-3510-3-6-e-d6f0f5ff51.novalocal'.
Dec 13 04:11:38.980863 systemd[1]: Started systemd-resolved.service.
Dec 13 04:11:38.981413 systemd[1]: Reached target network.target.
Dec 13 04:11:38.981855 systemd[1]: Reached target nss-lookup.target.
Dec 13 04:11:38.982280 systemd[1]: Reached target sysinit.target.
Dec 13 04:11:38.982799 systemd[1]: Started motdgen.path.
Dec 13 04:11:38.983231 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 04:11:38.983900 systemd[1]: Started logrotate.timer.
Dec 13 04:11:38.984422 systemd[1]: Started mdadm.timer.
Dec 13 04:11:38.984825 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 04:11:38.985258 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 04:11:38.985286 systemd[1]: Reached target paths.target.
Dec 13 04:11:38.985680 systemd[1]: Reached target timers.target.
Dec 13 04:11:38.986369 systemd[1]: Listening on dbus.socket.
Dec 13 04:11:38.988197 systemd[1]: Starting docker.socket...
Dec 13 04:11:38.991805 systemd[1]: Listening on sshd.socket.
Dec 13 04:11:38.992320 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:11:38.992730 systemd[1]: Listening on docker.socket.
Dec 13 04:11:38.993230 systemd[1]: Reached target sockets.target.
Dec 13 04:11:38.993643 systemd[1]: Reached target basic.target.
Dec 13 04:11:38.994106 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 04:11:38.994137 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 04:11:38.995128 systemd[1]: Starting containerd.service...
Dec 13 04:11:38.997518 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 04:11:38.999114 systemd[1]: Starting dbus.service...
Dec 13 04:11:39.001388 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 04:11:39.005147 systemd[1]: Starting extend-filesystems.service...
Dec 13 04:11:39.007553 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 04:11:39.009095 systemd[1]: Starting motdgen.service...
Dec 13 04:11:39.011323 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 04:11:39.014948 systemd[1]: Starting sshd-keygen.service...
Dec 13 04:11:39.019899 systemd[1]: Starting systemd-logind.service...
Dec 13 04:11:39.020451 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:11:39.020523 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 04:11:39.020980 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 04:11:39.022168 systemd[1]: Starting update-engine.service...
Dec 13 04:11:39.025400 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 04:11:39.041780 jq[1124]: false
Dec 13 04:11:39.048500 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 04:11:39.048719 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 04:11:39.050359 jq[1134]: true
Dec 13 04:11:39.059105 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 04:11:39.059344 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 04:11:39.069809 extend-filesystems[1125]: Found loop1
Dec 13 04:11:39.075265 dbus-daemon[1121]: [system] SELinux support is enabled
Dec 13 04:11:39.075462 systemd[1]: Started dbus.service.
Dec 13 04:11:39.078062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 04:11:39.078093 systemd[1]: Reached target system-config.target.
Dec 13 04:11:39.078551 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 04:11:39.078576 systemd[1]: Reached target user-config.target.
Dec 13 04:11:39.079344 jq[1144]: true
Dec 13 04:11:39.080989 extend-filesystems[1125]: Found vda
Dec 13 04:11:39.081585 extend-filesystems[1125]: Found vda1
Dec 13 04:11:39.083414 extend-filesystems[1125]: Found vda2
Dec 13 04:11:39.088543 extend-filesystems[1125]: Found vda3
Dec 13 04:11:39.089107 extend-filesystems[1125]: Found usr
Dec 13 04:11:39.090608 extend-filesystems[1125]: Found vda4
Dec 13 04:11:39.092637 extend-filesystems[1125]: Found vda6
Dec 13 04:11:39.092637 extend-filesystems[1125]: Found vda7
Dec 13 04:11:39.092637 extend-filesystems[1125]: Found vda9
Dec 13 04:11:39.092637 extend-filesystems[1125]: Checking size of /dev/vda9
Dec 13 04:11:39.103270 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 04:11:39.103450 systemd[1]: Finished motdgen.service.
Dec 13 04:11:39.128895 extend-filesystems[1125]: Resized partition /dev/vda9
Dec 13 04:11:39.138693 env[1138]: time="2024-12-13T04:11:39.138624054Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 04:11:39.147221 extend-filesystems[1171]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 04:11:39.171112 systemd-logind[1131]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 04:11:39.171138 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 04:11:39.171414 systemd-logind[1131]: New seat seat0.
Dec 13 04:11:39.175249 systemd[1]: Started systemd-logind.service.
Dec 13 04:11:39.185781 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Dec 13 04:11:39.188072 bash[1168]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 04:11:39.188327 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 04:11:39.197944 env[1138]: time="2024-12-13T04:11:39.197510575Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 04:11:39.197944 env[1138]: time="2024-12-13T04:11:39.197683971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:11:39.199636 env[1138]: time="2024-12-13T04:11:39.199583683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:11:39.199636 env[1138]: time="2024-12-13T04:11:39.199619030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:11:39.200329 env[1138]: time="2024-12-13T04:11:39.199862817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:11:39.200329 env[1138]: time="2024-12-13T04:11:39.199890809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 04:11:39.200329 env[1138]: time="2024-12-13T04:11:39.199911879Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 04:11:39.200329 env[1138]: time="2024-12-13T04:11:39.199924783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 04:11:39.200329 env[1138]: time="2024-12-13T04:11:39.200002649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:11:39.200329 env[1138]: time="2024-12-13T04:11:39.200237279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 04:11:39.200491 env[1138]: time="2024-12-13T04:11:39.200350251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 04:11:39.200491 env[1138]: time="2024-12-13T04:11:39.200369728Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 04:11:39.200491 env[1138]: time="2024-12-13T04:11:39.200422386Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 04:11:39.200491 env[1138]: time="2024-12-13T04:11:39.200436453Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 04:11:39.205695 update_engine[1133]: I1213 04:11:39.204680 1133 main.cc:92] Flatcar Update Engine starting
Dec 13 04:11:39.212908 systemd[1]: Started update-engine.service.
Dec 13 04:11:39.280917 update_engine[1133]: I1213 04:11:39.212987 1133 update_check_scheduler.cc:74] Next update check in 2m50s
Dec 13 04:11:39.216061 systemd[1]: Started locksmithd.service.
Dec 13 04:11:39.320793 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Dec 13 04:11:39.411908 extend-filesystems[1171]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 04:11:39.411908 extend-filesystems[1171]: old_desc_blocks = 1, new_desc_blocks = 3
Dec 13 04:11:39.411908 extend-filesystems[1171]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Dec 13 04:11:39.425669 extend-filesystems[1125]: Resized filesystem in /dev/vda9
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413241449Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413344422Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413381922Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413484003Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413535460Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413572499Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413605712Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413642250Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413675362Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413717592Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.413750583Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.414059042Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.414341993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 04:11:39.426990 env[1138]: time="2024-12-13T04:11:39.414617439Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 04:11:39.412209 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.415653152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.415719296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.415783636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.415916466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416058482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416127932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416158670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416189999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416221317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416253297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416285067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416320634Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416639562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416685828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.428068 env[1138]: time="2024-12-13T04:11:39.416721255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.412436 systemd[1]: Finished extend-filesystems.service.
Dec 13 04:11:39.429012 env[1138]: time="2024-12-13T04:11:39.416797037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 04:11:39.429012 env[1138]: time="2024-12-13T04:11:39.416841811Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 04:11:39.429012 env[1138]: time="2024-12-13T04:11:39.416872438Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 04:11:39.429012 env[1138]: time="2024-12-13T04:11:39.416912944Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 04:11:39.429012 env[1138]: time="2024-12-13T04:11:39.416990680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 04:11:39.422625 systemd[1]: Started containerd.service.
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.417461563Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.417614260Z" level=info msg="Connect containerd service"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.417680404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.421404128Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.421673152Z" level=info msg="Start subscribing containerd event"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.421729077Z" level=info msg="Start recovering state"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.421819617Z" level=info msg="Start event monitor"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.421836278Z" level=info msg="Start snapshots syncer"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.421847329Z" level=info msg="Start cni network conf syncer for default"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.421880842Z" level=info msg="Start streaming server"
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.422331156Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 04:11:39.429380 env[1138]: time="2024-12-13T04:11:39.422459878Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 04:11:39.435167 env[1138]: time="2024-12-13T04:11:39.433801689Z" level=info msg="containerd successfully booted in 0.297242s"
Dec 13 04:11:39.456597 locksmithd[1178]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 04:11:39.717020 sshd_keygen[1147]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 04:11:39.760859 systemd[1]: Finished sshd-keygen.service.
Dec 13 04:11:39.763196 systemd[1]: Starting issuegen.service...
Dec 13 04:11:39.775852 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 04:11:39.776006 systemd[1]: Finished issuegen.service.
Dec 13 04:11:39.777870 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 04:11:39.791997 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 04:11:39.794079 systemd[1]: Started getty@tty1.service.
Dec 13 04:11:39.796833 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 04:11:39.797999 systemd[1]: Reached target getty.target.
Dec 13 04:11:39.989191 systemd-networkd[971]: eth0: Gained IPv6LL
Dec 13 04:11:39.993082 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 04:11:39.995080 systemd[1]: Reached target network-online.target.
Dec 13 04:11:40.000149 systemd[1]: Starting kubelet.service...
Dec 13 04:11:40.056868 systemd[1]: Created slice system-sshd.slice.
Dec 13 04:11:40.061464 systemd[1]: Started sshd@0-172.24.4.93:22-172.24.4.1:60156.service.
Dec 13 04:11:41.067059 sshd[1200]: Accepted publickey for core from 172.24.4.1 port 60156 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:11:41.072016 sshd[1200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:11:41.104833 systemd[1]: Created slice user-500.slice.
Dec 13 04:11:41.109437 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 04:11:41.117352 systemd-logind[1131]: New session 1 of user core.
Dec 13 04:11:41.128032 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 04:11:41.132549 systemd[1]: Starting user@500.service...
Dec 13 04:11:41.136404 (systemd)[1204]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:11:41.232818 systemd[1204]: Queued start job for default target default.target.
Dec 13 04:11:41.233987 systemd[1204]: Reached target paths.target.
Dec 13 04:11:41.234111 systemd[1204]: Reached target sockets.target.
Dec 13 04:11:41.234197 systemd[1204]: Reached target timers.target.
Dec 13 04:11:41.234281 systemd[1204]: Reached target basic.target.
Dec 13 04:11:41.234449 systemd[1]: Started user@500.service.
Dec 13 04:11:41.235880 systemd[1]: Started session-1.scope.
Dec 13 04:11:41.237035 systemd[1204]: Reached target default.target.
Dec 13 04:11:41.237214 systemd[1204]: Startup finished in 93ms.
Dec 13 04:11:41.720864 systemd[1]: Started sshd@1-172.24.4.93:22-172.24.4.1:38500.service.
Dec 13 04:11:41.816197 systemd[1]: Started kubelet.service.
Dec 13 04:11:43.287702 kubelet[1216]: E1213 04:11:43.287654 1216 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:11:43.290034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:11:43.290349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:11:43.291017 systemd[1]: kubelet.service: Consumed 2.140s CPU time.
Dec 13 04:11:43.481340 sshd[1213]: Accepted publickey for core from 172.24.4.1 port 38500 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:11:43.484331 sshd[1213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:11:43.495228 systemd-logind[1131]: New session 2 of user core. Dec 13 04:11:43.496188 systemd[1]: Started session-2.scope. Dec 13 04:11:44.222295 sshd[1213]: pam_unix(sshd:session): session closed for user core Dec 13 04:11:44.229865 systemd[1]: Started sshd@2-172.24.4.93:22-172.24.4.1:38514.service. Dec 13 04:11:44.232372 systemd[1]: sshd@1-172.24.4.93:22-172.24.4.1:38500.service: Deactivated successfully. Dec 13 04:11:44.234247 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 04:11:44.238186 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit. Dec 13 04:11:44.241031 systemd-logind[1131]: Removed session 2. Dec 13 04:11:45.419319 sshd[1227]: Accepted publickey for core from 172.24.4.1 port 38514 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:11:45.422220 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:11:45.433659 systemd-logind[1131]: New session 3 of user core. Dec 13 04:11:45.434434 systemd[1]: Started session-3.scope. Dec 13 04:11:46.060846 sshd[1227]: pam_unix(sshd:session): session closed for user core Dec 13 04:11:46.066036 systemd[1]: sshd@2-172.24.4.93:22-172.24.4.1:38514.service: Deactivated successfully. Dec 13 04:11:46.067457 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 04:11:46.068883 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit. Dec 13 04:11:46.070908 systemd-logind[1131]: Removed session 3. 
Dec 13 04:11:46.121365 coreos-metadata[1120]: Dec 13 04:11:46.121 WARN failed to locate config-drive, using the metadata service API instead Dec 13 04:11:46.215362 coreos-metadata[1120]: Dec 13 04:11:46.215 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 04:11:46.699356 coreos-metadata[1120]: Dec 13 04:11:46.699 INFO Fetch successful Dec 13 04:11:46.699356 coreos-metadata[1120]: Dec 13 04:11:46.699 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 04:11:46.714060 coreos-metadata[1120]: Dec 13 04:11:46.713 INFO Fetch successful Dec 13 04:11:46.720477 unknown[1120]: wrote ssh authorized keys file for user: core Dec 13 04:11:46.754974 update-ssh-keys[1235]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:11:46.756555 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 04:11:46.757455 systemd[1]: Reached target multi-user.target. Dec 13 04:11:46.760274 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 04:11:46.777375 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 04:11:46.778101 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 04:11:46.778694 systemd[1]: Startup finished in 933ms (kernel) + 7.215s (initrd) + 13.818s (userspace) = 21.967s. Dec 13 04:11:53.526456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 04:11:53.527677 systemd[1]: Stopped kubelet.service. Dec 13 04:11:53.528013 systemd[1]: kubelet.service: Consumed 2.140s CPU time. Dec 13 04:11:53.530734 systemd[1]: Starting kubelet.service... Dec 13 04:11:53.789052 systemd[1]: Started kubelet.service. 
Dec 13 04:11:53.833102 kubelet[1241]: E1213 04:11:53.833063 1241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:11:53.839350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:11:53.839647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:11:56.070536 systemd[1]: Started sshd@3-172.24.4.93:22-172.24.4.1:45272.service. Dec 13 04:11:57.257613 sshd[1248]: Accepted publickey for core from 172.24.4.1 port 45272 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:11:57.261322 sshd[1248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:11:57.270888 systemd-logind[1131]: New session 4 of user core. Dec 13 04:11:57.271704 systemd[1]: Started session-4.scope. Dec 13 04:11:57.899555 sshd[1248]: pam_unix(sshd:session): session closed for user core Dec 13 04:11:57.907139 systemd[1]: Started sshd@4-172.24.4.93:22-172.24.4.1:45288.service. Dec 13 04:11:57.911204 systemd[1]: sshd@3-172.24.4.93:22-172.24.4.1:45272.service: Deactivated successfully. Dec 13 04:11:57.912698 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 04:11:57.915492 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit. Dec 13 04:11:57.918217 systemd-logind[1131]: Removed session 4. Dec 13 04:11:59.122463 sshd[1253]: Accepted publickey for core from 172.24.4.1 port 45288 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:11:59.126017 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:11:59.135981 systemd-logind[1131]: New session 5 of user core. Dec 13 04:11:59.136867 systemd[1]: Started session-5.scope. 
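The repeated kubelet start failures above all have the same root cause: /var/lib/kubelet/config.yaml does not exist yet. That file is normally generated during cluster bootstrap (e.g. by `kubeadm init`/`kubeadm join`). As a rough sketch only, a minimal hand-authored KubeletConfiguration could look like the fragment below; the exact field values for this host are not shown in the log and are assumptions, apart from the cgroup driver and static pod path, which do appear later in this log ("CgroupDriver":"systemd", path="/etc/kubernetes/manifests"):

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml (not taken from this host)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches the nodeConfig seen later in this log
staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
```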
Dec 13 04:11:59.903667 sshd[1253]: pam_unix(sshd:session): session closed for user core
Dec 13 04:11:59.908476 systemd[1]: sshd@4-172.24.4.93:22-172.24.4.1:45288.service: Deactivated successfully.
Dec 13 04:11:59.909956 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 04:11:59.913015 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit.
Dec 13 04:11:59.915432 systemd[1]: Started sshd@5-172.24.4.93:22-172.24.4.1:45294.service.
Dec 13 04:11:59.918472 systemd-logind[1131]: Removed session 5.
Dec 13 04:12:01.124482 sshd[1260]: Accepted publickey for core from 172.24.4.1 port 45294 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:01.127033 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:01.137350 systemd-logind[1131]: New session 6 of user core.
Dec 13 04:12:01.138100 systemd[1]: Started session-6.scope.
Dec 13 04:12:01.904491 sshd[1260]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:01.911998 systemd[1]: Started sshd@6-172.24.4.93:22-172.24.4.1:45302.service.
Dec 13 04:12:01.915536 systemd[1]: sshd@5-172.24.4.93:22-172.24.4.1:45294.service: Deactivated successfully.
Dec 13 04:12:01.917181 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 04:12:01.920251 systemd-logind[1131]: Session 6 logged out. Waiting for processes to exit.
Dec 13 04:12:01.922901 systemd-logind[1131]: Removed session 6.
Dec 13 04:12:03.130278 sshd[1265]: Accepted publickey for core from 172.24.4.1 port 45302 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:03.134686 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:03.146072 systemd-logind[1131]: New session 7 of user core.
Dec 13 04:12:03.146638 systemd[1]: Started session-7.scope.
Dec 13 04:12:03.632259 sudo[1269]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 04:12:03.633492 sudo[1269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 04:12:03.665501 systemd[1]: Starting coreos-metadata.service...
Dec 13 04:12:04.026484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 04:12:04.027261 systemd[1]: Stopped kubelet.service.
Dec 13 04:12:04.030492 systemd[1]: Starting kubelet.service...
Dec 13 04:12:04.372885 systemd[1]: Started kubelet.service.
Dec 13 04:12:04.681840 kubelet[1280]: E1213 04:12:04.681262 1280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:12:04.686318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:12:04.686594 systemd[1]: kubelet.service: Failed with result 'exit-code'.
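The "Scheduled restart job, restart counter is at N" entries come from systemd's Restart= handling on kubelet.service; the roughly ten-second gap between each failure and the next start attempt (e.g. failure at 04:11:43, restart at 04:11:53) is consistent with a RestartSec= of 10. The actual unit file is not shown in this log, but a drop-in producing that behaviour might look like:

```ini
# Hypothetical /etc/systemd/system/kubelet.service.d/10-restart.conf (sketch only)
[Service]
Restart=always
RestartSec=10
```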
Dec 13 04:12:10.731325 coreos-metadata[1273]: Dec 13 04:12:10.731 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 04:12:10.826926 coreos-metadata[1273]: Dec 13 04:12:10.826 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 04:12:11.266123 coreos-metadata[1273]: Dec 13 04:12:11.265 INFO Fetch successful
Dec 13 04:12:11.266485 coreos-metadata[1273]: Dec 13 04:12:11.266 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 04:12:11.281371 coreos-metadata[1273]: Dec 13 04:12:11.281 INFO Fetch successful
Dec 13 04:12:11.281611 coreos-metadata[1273]: Dec 13 04:12:11.281 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 04:12:11.295936 coreos-metadata[1273]: Dec 13 04:12:11.295 INFO Fetch successful
Dec 13 04:12:11.296214 coreos-metadata[1273]: Dec 13 04:12:11.296 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 04:12:11.309901 coreos-metadata[1273]: Dec 13 04:12:11.309 INFO Fetch successful
Dec 13 04:12:11.310165 coreos-metadata[1273]: Dec 13 04:12:11.310 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 04:12:11.326750 coreos-metadata[1273]: Dec 13 04:12:11.326 INFO Fetch successful
Dec 13 04:12:11.343367 systemd[1]: Finished coreos-metadata.service.
Dec 13 04:12:12.701345 systemd[1]: Stopped kubelet.service.
Dec 13 04:12:12.707488 systemd[1]: Starting kubelet.service...
Dec 13 04:12:12.769950 systemd[1]: Reloading.
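The coreos-metadata "Attempt #N" lines reflect a simple retry loop over the link-local metadata endpoint (169.254.169.254). The pattern can be sketched as below; this is a hypothetical helper for illustration, not the actual coreos-metadata implementation (which is written in Rust), and the fetch function is injected so the sketch does not depend on a real metadata service:

```python
import time

METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def fetch_with_retries(path, fetch, attempts=3, delay=0.0):
    """Try fetch(url) up to `attempts` times, logging attempt numbers
    in the same style as the coreos-metadata entries above; re-raise
    the last error if every attempt fails."""
    url = f"{METADATA_BASE}/{path}"
    last_err = None
    for attempt in range(1, attempts + 1):
        print(f"INFO Fetching {url}: Attempt #{attempt}")
        try:
            value = fetch(url)
            print("INFO Fetch successful")
            return value
        except OSError as err:
            last_err = err
            time.sleep(delay)
    raise last_err

# Stand-in fetcher that fails once and then succeeds, to exercise the retry path:
calls = {"n": 0}
def flaky_fetch(url):
    calls["n"] += 1
    if calls["n"] < 2:
        raise OSError("connection refused")
    return "node-1.example"

hostname = fetch_with_retries("hostname", flaky_fetch)
```

A real caller would pass something like `lambda url: urllib.request.urlopen(url, timeout=2).read().decode()` as `fetch`.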
Dec 13 04:12:12.891096 /usr/lib/systemd/system-generators/torcx-generator[1338]: time="2024-12-13T04:12:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 04:12:12.891126 /usr/lib/systemd/system-generators/torcx-generator[1338]: time="2024-12-13T04:12:12Z" level=info msg="torcx already run"
Dec 13 04:12:13.044865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 04:12:13.045021 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 04:12:13.067782 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:12:13.168961 systemd[1]: Started kubelet.service.
Dec 13 04:12:13.171969 systemd[1]: Stopping kubelet.service...
Dec 13 04:12:13.172644 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 04:12:13.173067 systemd[1]: Stopped kubelet.service.
Dec 13 04:12:13.176567 systemd[1]: Starting kubelet.service...
Dec 13 04:12:13.325483 systemd[1]: Started kubelet.service.
Dec 13 04:12:13.856551 kubelet[1392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:12:13.857246 kubelet[1392]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
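The locksmithd.service warnings above are systemd deprecation notices for cgroup v1 resource directives; on a cgroup v2 host ("CgroupVersion":2 appears later in this log) the unit should use the v2 equivalents. The fix in the unit file would be along these lines; the original values are not shown in the log, so the numbers here are placeholders:

```ini
# Sketch of the directive swap in locksmithd.service (placeholder values)
[Service]
# was: CPUShares=...   -> cgroup v2 equivalent:
CPUWeight=100
# was: MemoryLimit=... -> cgroup v2 equivalent:
MemoryMax=128M
```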
Dec 13 04:12:13.857381 kubelet[1392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:12:13.864809 kubelet[1392]: I1213 04:12:13.864690 1392 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 04:12:14.332950 kubelet[1392]: I1213 04:12:14.332893 1392 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 04:12:14.332950 kubelet[1392]: I1213 04:12:14.332954 1392 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 04:12:14.333990 kubelet[1392]: I1213 04:12:14.333958 1392 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 04:12:14.394329 kubelet[1392]: I1213 04:12:14.394285 1392 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 04:12:14.412928 kubelet[1392]: E1213 04:12:14.412870 1392 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 04:12:14.413365 kubelet[1392]: I1213 04:12:14.413309 1392 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 04:12:14.424796 kubelet[1392]: I1213 04:12:14.424682 1392 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 04:12:14.425025 kubelet[1392]: I1213 04:12:14.424911 1392 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 04:12:14.425256 kubelet[1392]: I1213 04:12:14.425144 1392 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 04:12:14.425692 kubelet[1392]: I1213 04:12:14.425233 1392 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 04:12:14.425692 kubelet[1392]: I1213 04:12:14.425691 1392 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 04:12:14.426058 kubelet[1392]: I1213 04:12:14.425715 1392 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 04:12:14.426058 kubelet[1392]: I1213 04:12:14.426004 1392 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:12:14.436733 kubelet[1392]: I1213 04:12:14.436693 1392 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 04:12:14.436991 kubelet[1392]: I1213 04:12:14.436965 1392 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 04:12:14.437221 kubelet[1392]: I1213 04:12:14.437196 1392 kubelet.go:314] "Adding apiserver pod source"
Dec 13 04:12:14.437387 kubelet[1392]: I1213 04:12:14.437364 1392 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 04:12:14.437599 kubelet[1392]: E1213 04:12:14.437368 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:14.437599 kubelet[1392]: E1213 04:12:14.437280 1392 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:14.449316 kubelet[1392]: I1213 04:12:14.449282 1392 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 04:12:14.454189 kubelet[1392]: I1213 04:12:14.454152 1392 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 04:12:14.455953 kubelet[1392]: W1213 04:12:14.455376 1392 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.93" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 04:12:14.455953 kubelet[1392]: E1213 04:12:14.455470 1392 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.93\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 04:12:14.455953 kubelet[1392]: W1213 04:12:14.455679 1392 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 04:12:14.455953 kubelet[1392]: E1213 04:12:14.455716 1392 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 04:12:14.456427 kubelet[1392]: W1213 04:12:14.456143 1392 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 04:12:14.458689 kubelet[1392]: I1213 04:12:14.458649 1392 server.go:1269] "Started kubelet"
Dec 13 04:12:14.467015 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 04:12:14.467193 kubelet[1392]: I1213 04:12:14.466534 1392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 04:12:14.477521 kubelet[1392]: I1213 04:12:14.477467 1392 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 04:12:14.480843 kubelet[1392]: I1213 04:12:14.480736 1392 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 04:12:14.491175 kubelet[1392]: I1213 04:12:14.491105 1392 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 04:12:14.492233 kubelet[1392]: E1213 04:12:14.492149 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:14.492443 kubelet[1392]: E1213 04:12:14.489591 1392 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 04:12:14.497028 kubelet[1392]: I1213 04:12:14.496912 1392 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 04:12:14.497716 kubelet[1392]: I1213 04:12:14.484981 1392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 04:12:14.507668 kubelet[1392]: I1213 04:12:14.498893 1392 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 04:12:14.507937 kubelet[1392]: I1213 04:12:14.481739 1392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 04:12:14.508405 kubelet[1392]: I1213 04:12:14.508378 1392 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 04:12:14.516875 kubelet[1392]: I1213 04:12:14.516831 1392 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 04:12:14.544987 kubelet[1392]: I1213 04:12:14.544955 1392 factory.go:221] Registration of the containerd container factory successfully
Dec 13 04:12:14.544987 kubelet[1392]: I1213 04:12:14.544977 1392 factory.go:221] Registration of the systemd container factory successfully
Dec 13 04:12:14.551729 kubelet[1392]: E1213 04:12:14.551696 1392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.93\" not found" node="172.24.4.93"
Dec 13 04:12:14.574239 kubelet[1392]: I1213 04:12:14.574210 1392 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 04:12:14.574424 kubelet[1392]: I1213 04:12:14.574411 1392 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 04:12:14.574495 kubelet[1392]: I1213 04:12:14.574486 1392 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:12:14.580070 kubelet[1392]: I1213 04:12:14.580038 1392 policy_none.go:49] "None policy: Start"
Dec 13 04:12:14.581136 kubelet[1392]: I1213 04:12:14.581124 1392 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 04:12:14.581231 kubelet[1392]: I1213 04:12:14.581222 1392 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 04:12:14.592075 systemd[1]: Created slice kubepods.slice.
Dec 13 04:12:14.594144 kubelet[1392]: E1213 04:12:14.594128 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:14.599507 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 04:12:14.603012 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 04:12:14.608558 kubelet[1392]: I1213 04:12:14.608526 1392 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 04:12:14.608695 kubelet[1392]: I1213 04:12:14.608673 1392 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 04:12:14.608735 kubelet[1392]: I1213 04:12:14.608691 1392 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 04:12:14.609862 kubelet[1392]: I1213 04:12:14.609239 1392 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 04:12:14.612330 kubelet[1392]: E1213 04:12:14.612312 1392 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.93\" not found"
Dec 13 04:12:14.647633 kubelet[1392]: I1213 04:12:14.647542 1392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 04:12:14.648614 kubelet[1392]: I1213 04:12:14.648595 1392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 04:12:14.648681 kubelet[1392]: I1213 04:12:14.648639 1392 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 04:12:14.648681 kubelet[1392]: I1213 04:12:14.648661 1392 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 04:12:14.648742 kubelet[1392]: E1213 04:12:14.648709 1392 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 04:12:14.710245 kubelet[1392]: I1213 04:12:14.710179 1392 kubelet_node_status.go:72] "Attempting to register node" node="172.24.4.93"
Dec 13 04:12:14.718570 kubelet[1392]: I1213 04:12:14.718481 1392 kubelet_node_status.go:75] "Successfully registered node" node="172.24.4.93"
Dec 13 04:12:14.718570 kubelet[1392]: E1213 04:12:14.718553 1392 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.24.4.93\": node \"172.24.4.93\" not found"
Dec 13 04:12:14.748804 kubelet[1392]: E1213 04:12:14.748740 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:14.850853 kubelet[1392]: E1213 04:12:14.849606 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:14.878794 sudo[1269]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:14.950128 kubelet[1392]: E1213 04:12:14.949974 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:15.051173 kubelet[1392]: E1213 04:12:15.051024 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:15.152270 kubelet[1392]: E1213 04:12:15.151945 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:15.192586 sshd[1265]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:15.199749 systemd[1]: sshd@6-172.24.4.93:22-172.24.4.1:45302.service: Deactivated successfully.
Dec 13 04:12:15.200691 systemd-logind[1131]: Session 7 logged out. Waiting for processes to exit.
Dec 13 04:12:15.201404 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 04:12:15.201684 systemd[1]: session-7.scope: Consumed 1.151s CPU time.
Dec 13 04:12:15.204349 systemd-logind[1131]: Removed session 7.
Dec 13 04:12:15.253102 kubelet[1392]: E1213 04:12:15.252879 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:15.338042 kubelet[1392]: I1213 04:12:15.337973 1392 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 04:12:15.338476 kubelet[1392]: W1213 04:12:15.338261 1392 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 04:12:15.338476 kubelet[1392]: W1213 04:12:15.338328 1392 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 04:12:15.353643 kubelet[1392]: E1213 04:12:15.353532 1392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.24.4.93\" not found"
Dec 13 04:12:15.439720 kubelet[1392]: I1213 04:12:15.438653 1392 apiserver.go:52] "Watching apiserver"
Dec 13 04:12:15.440030 kubelet[1392]: E1213 04:12:15.439022 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:15.457713 kubelet[1392]: I1213 04:12:15.457671 1392 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 04:12:15.459068 env[1138]: time="2024-12-13T04:12:15.458847198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 04:12:15.459824 kubelet[1392]: I1213 04:12:15.459372 1392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 04:12:15.465524 systemd[1]: Created slice kubepods-besteffort-pod50b4072d_f237_4b6d_8ee0_429c0919b026.slice.
Dec 13 04:12:15.482797 systemd[1]: Created slice kubepods-burstable-podecfe4f83_a1d3_40da_aca7_af579fc21da1.slice.
Dec 13 04:12:15.498862 kubelet[1392]: I1213 04:12:15.498825 1392 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 04:12:15.513839 kubelet[1392]: I1213 04:12:15.513733 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-lib-modules\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.513990 kubelet[1392]: I1213 04:12:15.513853 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecfe4f83-a1d3-40da-aca7-af579fc21da1-clustermesh-secrets\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.513990 kubelet[1392]: I1213 04:12:15.513906 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-net\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.513990 kubelet[1392]: I1213 04:12:15.513953 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-kernel\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514199 kubelet[1392]: I1213 04:12:15.513998 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50b4072d-f237-4b6d-8ee0-429c0919b026-lib-modules\") pod \"kube-proxy-kl5lk\" (UID: \"50b4072d-f237-4b6d-8ee0-429c0919b026\") " pod="kube-system/kube-proxy-kl5lk"
Dec 13 04:12:15.514199 kubelet[1392]: I1213 04:12:15.514036 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hostproc\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514199 kubelet[1392]: I1213 04:12:15.514086 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-xtables-lock\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514199 kubelet[1392]: I1213 04:12:15.514129 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-config-path\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514441 kubelet[1392]: I1213 04:12:15.514198 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hubble-tls\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514441 kubelet[1392]: I1213 04:12:15.514241 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ft77\" (UniqueName: \"kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-kube-api-access-6ft77\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514441 kubelet[1392]: I1213 04:12:15.514293 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50b4072d-f237-4b6d-8ee0-429c0919b026-kube-proxy\") pod \"kube-proxy-kl5lk\" (UID: \"50b4072d-f237-4b6d-8ee0-429c0919b026\") " pod="kube-system/kube-proxy-kl5lk"
Dec 13 04:12:15.514441 kubelet[1392]: I1213 04:12:15.514331 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50b4072d-f237-4b6d-8ee0-429c0919b026-xtables-lock\") pod \"kube-proxy-kl5lk\" (UID: \"50b4072d-f237-4b6d-8ee0-429c0919b026\") " pod="kube-system/kube-proxy-kl5lk"
Dec 13 04:12:15.514441 kubelet[1392]: I1213 04:12:15.514370 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4mk\" (UniqueName: \"kubernetes.io/projected/50b4072d-f237-4b6d-8ee0-429c0919b026-kube-api-access-bf4mk\") pod \"kube-proxy-kl5lk\" (UID: \"50b4072d-f237-4b6d-8ee0-429c0919b026\") " pod="kube-system/kube-proxy-kl5lk"
Dec 13 04:12:15.514743 kubelet[1392]: I1213 04:12:15.514408 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-run\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514743 kubelet[1392]: I1213 04:12:15.514446 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-bpf-maps\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514743 kubelet[1392]: I1213 04:12:15.514484 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-cgroup\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514743 kubelet[1392]: I1213 04:12:15.514523 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cni-path\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.514743 kubelet[1392]: I1213 04:12:15.514561 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-etc-cni-netd\") pod \"cilium-nrsg5\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") " pod="kube-system/cilium-nrsg5"
Dec 13 04:12:15.615970 kubelet[1392]: I1213 04:12:15.615853 1392 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 13 04:12:15.779749 env[1138]: time="2024-12-13T04:12:15.779647470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kl5lk,Uid:50b4072d-f237-4b6d-8ee0-429c0919b026,Namespace:kube-system,Attempt:0,}"
Dec 13 04:12:15.798306 env[1138]: time="2024-12-13T04:12:15.797110840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nrsg5,Uid:ecfe4f83-a1d3-40da-aca7-af579fc21da1,Namespace:kube-system,Attempt:0,}"
Dec 13 04:12:16.440887 kubelet[1392]: E1213 04:12:16.440821 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:16.664535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968542019.mount: Deactivated successfully.
Dec 13 04:12:16.689648 env[1138]: time="2024-12-13T04:12:16.689494578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:16.694244 env[1138]: time="2024-12-13T04:12:16.694091006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:16.699669 env[1138]: time="2024-12-13T04:12:16.699606131Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:16.704597 env[1138]: time="2024-12-13T04:12:16.704544898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:16.712799 env[1138]: time="2024-12-13T04:12:16.712686550Z" level=info
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:16.720267 env[1138]: time="2024-12-13T04:12:16.720216328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:16.727185 env[1138]: time="2024-12-13T04:12:16.727106491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:16.730110 env[1138]: time="2024-12-13T04:12:16.730043588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:16.769802 env[1138]: time="2024-12-13T04:12:16.766622361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:12:16.769802 env[1138]: time="2024-12-13T04:12:16.766701809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:12:16.769802 env[1138]: time="2024-12-13T04:12:16.766730122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:12:16.769802 env[1138]: time="2024-12-13T04:12:16.766885914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8 pid=1445 runtime=io.containerd.runc.v2 Dec 13 04:12:16.789346 systemd[1]: Started cri-containerd-c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8.scope. Dec 13 04:12:16.801686 env[1138]: time="2024-12-13T04:12:16.801601395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:12:16.801890 env[1138]: time="2024-12-13T04:12:16.801673930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:12:16.801890 env[1138]: time="2024-12-13T04:12:16.801689098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:12:16.802182 env[1138]: time="2024-12-13T04:12:16.802139360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f69482162c46d0439047a90d53b92486daf9155d677b59a7519117286ebe738b pid=1470 runtime=io.containerd.runc.v2 Dec 13 04:12:16.821429 systemd[1]: Started cri-containerd-f69482162c46d0439047a90d53b92486daf9155d677b59a7519117286ebe738b.scope. 
Dec 13 04:12:16.829017 env[1138]: time="2024-12-13T04:12:16.828961052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nrsg5,Uid:ecfe4f83-a1d3-40da-aca7-af579fc21da1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\"" Dec 13 04:12:16.831534 env[1138]: time="2024-12-13T04:12:16.831503322Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 04:12:16.853360 env[1138]: time="2024-12-13T04:12:16.853301800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kl5lk,Uid:50b4072d-f237-4b6d-8ee0-429c0919b026,Namespace:kube-system,Attempt:0,} returns sandbox id \"f69482162c46d0439047a90d53b92486daf9155d677b59a7519117286ebe738b\"" Dec 13 04:12:17.441533 kubelet[1392]: E1213 04:12:17.441396 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:18.442255 kubelet[1392]: E1213 04:12:18.442125 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:19.442407 kubelet[1392]: E1213 04:12:19.442361 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:20.443551 kubelet[1392]: E1213 04:12:20.443420 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:21.444028 kubelet[1392]: E1213 04:12:21.443927 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:22.444381 kubelet[1392]: E1213 04:12:22.444312 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:23.356670 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1006469425.mount: Deactivated successfully. Dec 13 04:12:23.444712 kubelet[1392]: E1213 04:12:23.444678 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:24.445885 kubelet[1392]: E1213 04:12:24.445811 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:24.819313 update_engine[1133]: I1213 04:12:24.819249 1133 update_attempter.cc:509] Updating boot flags... Dec 13 04:12:25.446518 kubelet[1392]: E1213 04:12:25.446462 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:26.446993 kubelet[1392]: E1213 04:12:26.446884 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:27.447716 kubelet[1392]: E1213 04:12:27.447616 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:28.021993 env[1138]: time="2024-12-13T04:12:28.021912995Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:28.024348 env[1138]: time="2024-12-13T04:12:28.024287812Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:28.027036 env[1138]: time="2024-12-13T04:12:28.026998086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Dec 13 04:12:28.028764 env[1138]: time="2024-12-13T04:12:28.028684763Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 04:12:28.031447 env[1138]: time="2024-12-13T04:12:28.031422699Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 04:12:28.033143 env[1138]: time="2024-12-13T04:12:28.033072338Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:12:28.067247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655984794.mount: Deactivated successfully. Dec 13 04:12:28.083638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302991224.mount: Deactivated successfully. Dec 13 04:12:28.098492 env[1138]: time="2024-12-13T04:12:28.098415984Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\"" Dec 13 04:12:28.101385 env[1138]: time="2024-12-13T04:12:28.101315053Z" level=info msg="StartContainer for \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\"" Dec 13 04:12:28.143521 systemd[1]: Started cri-containerd-b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8.scope. Dec 13 04:12:28.198150 systemd[1]: cri-containerd-b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8.scope: Deactivated successfully. 
Dec 13 04:12:28.198740 env[1138]: time="2024-12-13T04:12:28.198701210Z" level=info msg="StartContainer for \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\" returns successfully" Dec 13 04:12:28.448727 kubelet[1392]: E1213 04:12:28.448646 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:29.030013 env[1138]: time="2024-12-13T04:12:29.029862545Z" level=info msg="shim disconnected" id=b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8 Dec 13 04:12:29.030013 env[1138]: time="2024-12-13T04:12:29.029989493Z" level=warning msg="cleaning up after shim disconnected" id=b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8 namespace=k8s.io Dec 13 04:12:29.030013 env[1138]: time="2024-12-13T04:12:29.030015622Z" level=info msg="cleaning up dead shim" Dec 13 04:12:29.048970 env[1138]: time="2024-12-13T04:12:29.048905832Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:12:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1587 runtime=io.containerd.runc.v2\n" Dec 13 04:12:29.057018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8-rootfs.mount: Deactivated successfully. Dec 13 04:12:29.450343 kubelet[1392]: E1213 04:12:29.449516 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:29.756598 env[1138]: time="2024-12-13T04:12:29.756197847Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:12:29.871184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229871947.mount: Deactivated successfully. 
Dec 13 04:12:29.895627 env[1138]: time="2024-12-13T04:12:29.895590497Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\"" Dec 13 04:12:29.910930 env[1138]: time="2024-12-13T04:12:29.910901406Z" level=info msg="StartContainer for \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\"" Dec 13 04:12:29.952433 systemd[1]: Started cri-containerd-5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c.scope. Dec 13 04:12:30.005635 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:12:30.006225 systemd[1]: Stopped systemd-sysctl.service. Dec 13 04:12:30.006475 systemd[1]: Stopping systemd-sysctl.service... Dec 13 04:12:30.010413 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:12:30.016041 systemd[1]: cri-containerd-5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c.scope: Deactivated successfully. Dec 13 04:12:30.019529 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:12:30.057611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997248669.mount: Deactivated successfully. Dec 13 04:12:30.066817 env[1138]: time="2024-12-13T04:12:30.066706300Z" level=info msg="StartContainer for \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\" returns successfully" Dec 13 04:12:30.111823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c-rootfs.mount: Deactivated successfully. 
Dec 13 04:12:30.224329 env[1138]: time="2024-12-13T04:12:30.224210835Z" level=info msg="shim disconnected" id=5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c Dec 13 04:12:30.224329 env[1138]: time="2024-12-13T04:12:30.224308899Z" level=warning msg="cleaning up after shim disconnected" id=5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c namespace=k8s.io Dec 13 04:12:30.224329 env[1138]: time="2024-12-13T04:12:30.224334126Z" level=info msg="cleaning up dead shim" Dec 13 04:12:30.247401 env[1138]: time="2024-12-13T04:12:30.247315210Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:12:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1651 runtime=io.containerd.runc.v2\n" Dec 13 04:12:30.449930 kubelet[1392]: E1213 04:12:30.449836 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:30.626209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331992513.mount: Deactivated successfully. Dec 13 04:12:30.761185 env[1138]: time="2024-12-13T04:12:30.761054163Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:12:30.788973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409448905.mount: Deactivated successfully. 
Dec 13 04:12:30.803881 env[1138]: time="2024-12-13T04:12:30.803720600Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\"" Dec 13 04:12:30.804680 env[1138]: time="2024-12-13T04:12:30.804605488Z" level=info msg="StartContainer for \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\"" Dec 13 04:12:30.845335 systemd[1]: Started cri-containerd-75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73.scope. Dec 13 04:12:30.884322 systemd[1]: cri-containerd-75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73.scope: Deactivated successfully. Dec 13 04:12:30.886555 env[1138]: time="2024-12-13T04:12:30.886463425Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfe4f83_a1d3_40da_aca7_af579fc21da1.slice/cri-containerd-75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73.scope/memory.events\": no such file or directory" Dec 13 04:12:30.890864 env[1138]: time="2024-12-13T04:12:30.890825203Z" level=info msg="StartContainer for \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\" returns successfully" Dec 13 04:12:31.180960 env[1138]: time="2024-12-13T04:12:31.180839530Z" level=info msg="shim disconnected" id=75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73 Dec 13 04:12:31.180960 env[1138]: time="2024-12-13T04:12:31.180930130Z" level=warning msg="cleaning up after shim disconnected" id=75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73 namespace=k8s.io Dec 13 04:12:31.180960 env[1138]: time="2024-12-13T04:12:31.180954876Z" level=info msg="cleaning up dead shim" Dec 13 04:12:31.202504 env[1138]: time="2024-12-13T04:12:31.202407802Z" level=warning 
msg="cleanup warnings time=\"2024-12-13T04:12:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1708 runtime=io.containerd.runc.v2\n" Dec 13 04:12:31.451122 kubelet[1392]: E1213 04:12:31.450921 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:31.777606 env[1138]: time="2024-12-13T04:12:31.777550694Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:12:31.802272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679309152.mount: Deactivated successfully. Dec 13 04:12:31.820894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556056111.mount: Deactivated successfully. Dec 13 04:12:31.835083 env[1138]: time="2024-12-13T04:12:31.834932107Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\"" Dec 13 04:12:31.836568 env[1138]: time="2024-12-13T04:12:31.836515713Z" level=info msg="StartContainer for \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\"" Dec 13 04:12:31.869595 systemd[1]: Started cri-containerd-da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091.scope. 
Dec 13 04:12:31.888260 env[1138]: time="2024-12-13T04:12:31.888201530Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:31.889417 env[1138]: time="2024-12-13T04:12:31.889378414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:31.892668 env[1138]: time="2024-12-13T04:12:31.892643779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:31.895695 env[1138]: time="2024-12-13T04:12:31.895629791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:12:31.896298 env[1138]: time="2024-12-13T04:12:31.896268828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 04:12:31.900892 env[1138]: time="2024-12-13T04:12:31.899778762Z" level=info msg="CreateContainer within sandbox \"f69482162c46d0439047a90d53b92486daf9155d677b59a7519117286ebe738b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 04:12:31.910472 systemd[1]: cri-containerd-da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091.scope: Deactivated successfully. 
Dec 13 04:12:31.912933 env[1138]: time="2024-12-13T04:12:31.912816638Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfe4f83_a1d3_40da_aca7_af579fc21da1.slice/cri-containerd-da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091.scope/memory.events\": no such file or directory" Dec 13 04:12:31.916634 env[1138]: time="2024-12-13T04:12:31.916551182Z" level=info msg="StartContainer for \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\" returns successfully" Dec 13 04:12:31.927679 env[1138]: time="2024-12-13T04:12:31.927610113Z" level=info msg="CreateContainer within sandbox \"f69482162c46d0439047a90d53b92486daf9155d677b59a7519117286ebe738b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8e2d6e60b4fc43d394381d4b60b3f943f364648aa5ca2d89344b9b270bdfca43\"" Dec 13 04:12:31.928171 env[1138]: time="2024-12-13T04:12:31.928137041Z" level=info msg="StartContainer for \"8e2d6e60b4fc43d394381d4b60b3f943f364648aa5ca2d89344b9b270bdfca43\"" Dec 13 04:12:31.951383 systemd[1]: Started cri-containerd-8e2d6e60b4fc43d394381d4b60b3f943f364648aa5ca2d89344b9b270bdfca43.scope. 
Dec 13 04:12:32.243165 env[1138]: time="2024-12-13T04:12:32.242911115Z" level=info msg="StartContainer for \"8e2d6e60b4fc43d394381d4b60b3f943f364648aa5ca2d89344b9b270bdfca43\" returns successfully" Dec 13 04:12:32.247836 env[1138]: time="2024-12-13T04:12:32.247223843Z" level=info msg="shim disconnected" id=da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091 Dec 13 04:12:32.248001 env[1138]: time="2024-12-13T04:12:32.247836010Z" level=warning msg="cleaning up after shim disconnected" id=da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091 namespace=k8s.io Dec 13 04:12:32.248001 env[1138]: time="2024-12-13T04:12:32.247868950Z" level=info msg="cleaning up dead shim" Dec 13 04:12:32.275164 env[1138]: time="2024-12-13T04:12:32.275095916Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:12:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1801 runtime=io.containerd.runc.v2\n" Dec 13 04:12:32.452892 kubelet[1392]: E1213 04:12:32.452832 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:32.793818 env[1138]: time="2024-12-13T04:12:32.793234227Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:12:32.833141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206912259.mount: Deactivated successfully. 
Dec 13 04:12:32.842789 env[1138]: time="2024-12-13T04:12:32.840921494Z" level=info msg="CreateContainer within sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\"" Dec 13 04:12:32.844210 env[1138]: time="2024-12-13T04:12:32.844173295Z" level=info msg="StartContainer for \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\"" Dec 13 04:12:32.847195 kubelet[1392]: I1213 04:12:32.847126 1392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kl5lk" podStartSLOduration=3.803514886 podStartE2EDuration="18.847109995s" podCreationTimestamp="2024-12-13 04:12:14 +0000 UTC" firstStartedPulling="2024-12-13 04:12:16.85432411 +0000 UTC m=+3.520313070" lastFinishedPulling="2024-12-13 04:12:31.897919219 +0000 UTC m=+18.563908179" observedRunningTime="2024-12-13 04:12:32.803259916 +0000 UTC m=+19.469248886" watchObservedRunningTime="2024-12-13 04:12:32.847109995 +0000 UTC m=+19.513098965" Dec 13 04:12:32.869174 systemd[1]: Started cri-containerd-37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3.scope. 
Dec 13 04:12:32.914689 env[1138]: time="2024-12-13T04:12:32.914607475Z" level=info msg="StartContainer for \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\" returns successfully" Dec 13 04:12:33.088999 kubelet[1392]: I1213 04:12:33.088806 1392 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 04:12:33.373796 kernel: Initializing XFRM netlink socket Dec 13 04:12:33.453413 kubelet[1392]: E1213 04:12:33.453215 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:33.832777 kubelet[1392]: I1213 04:12:33.832657 1392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nrsg5" podStartSLOduration=8.633651001 podStartE2EDuration="19.832626278s" podCreationTimestamp="2024-12-13 04:12:14 +0000 UTC" firstStartedPulling="2024-12-13 04:12:16.8311324 +0000 UTC m=+3.497121360" lastFinishedPulling="2024-12-13 04:12:28.030107667 +0000 UTC m=+14.696096637" observedRunningTime="2024-12-13 04:12:33.832025782 +0000 UTC m=+20.498014803" watchObservedRunningTime="2024-12-13 04:12:33.832626278 +0000 UTC m=+20.498615288" Dec 13 04:12:34.437904 kubelet[1392]: E1213 04:12:34.437828 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:34.455093 kubelet[1392]: E1213 04:12:34.455048 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:12:35.112899 systemd-networkd[971]: cilium_host: Link UP Dec 13 04:12:35.113264 systemd-networkd[971]: cilium_net: Link UP Dec 13 04:12:35.113273 systemd-networkd[971]: cilium_net: Gained carrier Dec 13 04:12:35.116596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 04:12:35.117395 systemd-networkd[971]: cilium_host: Gained carrier Dec 13 04:12:35.228893 systemd-networkd[971]: cilium_net: Gained IPv6LL Dec 13 
04:12:35.265629 systemd-networkd[971]: cilium_vxlan: Link UP
Dec 13 04:12:35.265641 systemd-networkd[971]: cilium_vxlan: Gained carrier
Dec 13 04:12:35.455943 kubelet[1392]: E1213 04:12:35.455830 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:35.566945 kernel: NET: Registered PF_ALG protocol family
Dec 13 04:12:35.733226 systemd-networkd[971]: cilium_host: Gained IPv6LL
Dec 13 04:12:36.456978 kubelet[1392]: E1213 04:12:36.456896 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:36.490048 systemd-networkd[971]: lxc_health: Link UP
Dec 13 04:12:36.504870 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 04:12:36.505038 systemd-networkd[971]: lxc_health: Gained carrier
Dec 13 04:12:37.280176 systemd-networkd[971]: cilium_vxlan: Gained IPv6LL
Dec 13 04:12:37.458787 kubelet[1392]: E1213 04:12:37.458744 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:38.460282 kubelet[1392]: E1213 04:12:38.460221 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:38.484951 systemd-networkd[971]: lxc_health: Gained IPv6LL
Dec 13 04:12:39.462191 kubelet[1392]: E1213 04:12:39.462156 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:40.463307 kubelet[1392]: E1213 04:12:40.463263 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:41.212647 systemd[1]: Created slice kubepods-besteffort-pod53f31e21_2605_44c6_87f2_0558ded61572.slice.
Dec 13 04:12:41.307527 kubelet[1392]: I1213 04:12:41.307469 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv69v\" (UniqueName: \"kubernetes.io/projected/53f31e21-2605-44c6-87f2-0558ded61572-kube-api-access-kv69v\") pod \"nginx-deployment-8587fbcb89-hrltn\" (UID: \"53f31e21-2605-44c6-87f2-0558ded61572\") " pod="default/nginx-deployment-8587fbcb89-hrltn"
Dec 13 04:12:41.322191 kubelet[1392]: I1213 04:12:41.322160 1392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 04:12:41.464834 kubelet[1392]: E1213 04:12:41.464668 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:41.523141 env[1138]: time="2024-12-13T04:12:41.523041749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hrltn,Uid:53f31e21-2605-44c6-87f2-0558ded61572,Namespace:default,Attempt:0,}"
Dec 13 04:12:41.618003 systemd-networkd[971]: lxcc30eb3b99ef3: Link UP
Dec 13 04:12:41.628922 kernel: eth0: renamed from tmp204df
Dec 13 04:12:41.636234 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 04:12:41.636366 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc30eb3b99ef3: link becomes ready
Dec 13 04:12:41.636326 systemd-networkd[971]: lxcc30eb3b99ef3: Gained carrier
Dec 13 04:12:41.993995 env[1138]: time="2024-12-13T04:12:41.993911274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:12:41.994269 env[1138]: time="2024-12-13T04:12:41.994244640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:12:41.994361 env[1138]: time="2024-12-13T04:12:41.994339718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:12:41.994603 env[1138]: time="2024-12-13T04:12:41.994576050Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/204dff49d66669ad7e44f9ba443871ff621d9787aa3a6861575bdb90bcc65b50 pid=2449 runtime=io.containerd.runc.v2
Dec 13 04:12:42.023274 systemd[1]: Started cri-containerd-204dff49d66669ad7e44f9ba443871ff621d9787aa3a6861575bdb90bcc65b50.scope.
Dec 13 04:12:42.073468 env[1138]: time="2024-12-13T04:12:42.073409176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hrltn,Uid:53f31e21-2605-44c6-87f2-0558ded61572,Namespace:default,Attempt:0,} returns sandbox id \"204dff49d66669ad7e44f9ba443871ff621d9787aa3a6861575bdb90bcc65b50\""
Dec 13 04:12:42.075613 env[1138]: time="2024-12-13T04:12:42.075581168Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 04:12:42.465959 kubelet[1392]: E1213 04:12:42.465851 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:42.837649 systemd-networkd[971]: lxcc30eb3b99ef3: Gained IPv6LL
Dec 13 04:12:43.466127 kubelet[1392]: E1213 04:12:43.466042 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:44.467344 kubelet[1392]: E1213 04:12:44.467177 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:45.467888 kubelet[1392]: E1213 04:12:45.467729 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:46.208670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063243822.mount: Deactivated successfully.
Dec 13 04:12:46.468914 kubelet[1392]: E1213 04:12:46.468794 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:47.469444 kubelet[1392]: E1213 04:12:47.469397 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:48.292582 env[1138]: time="2024-12-13T04:12:48.292419757Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:48.297305 env[1138]: time="2024-12-13T04:12:48.297208132Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:48.301803 env[1138]: time="2024-12-13T04:12:48.301689393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:48.306007 env[1138]: time="2024-12-13T04:12:48.305952555Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:12:48.308248 env[1138]: time="2024-12-13T04:12:48.308188917Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 04:12:48.315063 env[1138]: time="2024-12-13T04:12:48.315002460Z" level=info msg="CreateContainer within sandbox \"204dff49d66669ad7e44f9ba443871ff621d9787aa3a6861575bdb90bcc65b50\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 04:12:48.353338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865526843.mount: Deactivated successfully.
Dec 13 04:12:48.361133 env[1138]: time="2024-12-13T04:12:48.361026726Z" level=info msg="CreateContainer within sandbox \"204dff49d66669ad7e44f9ba443871ff621d9787aa3a6861575bdb90bcc65b50\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"afa60fe639c7e742b1f446cfb359a7f5f7e1829866c2bb4bc10e9bd20a6684fc\""
Dec 13 04:12:48.363183 env[1138]: time="2024-12-13T04:12:48.363124189Z" level=info msg="StartContainer for \"afa60fe639c7e742b1f446cfb359a7f5f7e1829866c2bb4bc10e9bd20a6684fc\""
Dec 13 04:12:48.417718 systemd[1]: Started cri-containerd-afa60fe639c7e742b1f446cfb359a7f5f7e1829866c2bb4bc10e9bd20a6684fc.scope.
Dec 13 04:12:48.463249 env[1138]: time="2024-12-13T04:12:48.463184582Z" level=info msg="StartContainer for \"afa60fe639c7e742b1f446cfb359a7f5f7e1829866c2bb4bc10e9bd20a6684fc\" returns successfully"
Dec 13 04:12:48.469887 kubelet[1392]: E1213 04:12:48.469815 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:48.977991 kubelet[1392]: I1213 04:12:48.977889 1392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-hrltn" podStartSLOduration=1.7409085869999998 podStartE2EDuration="7.977857545s" podCreationTimestamp="2024-12-13 04:12:41 +0000 UTC" firstStartedPulling="2024-12-13 04:12:42.074972967 +0000 UTC m=+28.740961927" lastFinishedPulling="2024-12-13 04:12:48.311921875 +0000 UTC m=+34.977910885" observedRunningTime="2024-12-13 04:12:48.977090146 +0000 UTC m=+35.643079166" watchObservedRunningTime="2024-12-13 04:12:48.977857545 +0000 UTC m=+35.643846556"
Dec 13 04:12:49.337696 systemd[1]: run-containerd-runc-k8s.io-afa60fe639c7e742b1f446cfb359a7f5f7e1829866c2bb4bc10e9bd20a6684fc-runc.BXUmfw.mount: Deactivated successfully.
Dec 13 04:12:49.471045 kubelet[1392]: E1213 04:12:49.470963 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:50.471962 kubelet[1392]: E1213 04:12:50.471893 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:51.473815 kubelet[1392]: E1213 04:12:51.473699 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:52.475323 kubelet[1392]: E1213 04:12:52.475223 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:53.476502 kubelet[1392]: E1213 04:12:53.476434 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:54.437595 kubelet[1392]: E1213 04:12:54.437531 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:54.477785 kubelet[1392]: E1213 04:12:54.477689 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:55.477998 kubelet[1392]: E1213 04:12:55.477906 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:56.479904 kubelet[1392]: E1213 04:12:56.479833 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:56.969331 systemd[1]: Created slice kubepods-besteffort-pod9b3125e4_d8a3_4859_b3a0_5690ecc28e1c.slice.
Dec 13 04:12:57.038216 kubelet[1392]: I1213 04:12:57.038108 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9b3125e4-d8a3-4859-b3a0-5690ecc28e1c-data\") pod \"nfs-server-provisioner-0\" (UID: \"9b3125e4-d8a3-4859-b3a0-5690ecc28e1c\") " pod="default/nfs-server-provisioner-0"
Dec 13 04:12:57.038216 kubelet[1392]: I1213 04:12:57.038208 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-942k5\" (UniqueName: \"kubernetes.io/projected/9b3125e4-d8a3-4859-b3a0-5690ecc28e1c-kube-api-access-942k5\") pod \"nfs-server-provisioner-0\" (UID: \"9b3125e4-d8a3-4859-b3a0-5690ecc28e1c\") " pod="default/nfs-server-provisioner-0"
Dec 13 04:12:57.280418 env[1138]: time="2024-12-13T04:12:57.280296477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9b3125e4-d8a3-4859-b3a0-5690ecc28e1c,Namespace:default,Attempt:0,}"
Dec 13 04:12:57.398374 systemd-networkd[971]: lxc230f65268205: Link UP
Dec 13 04:12:57.404979 kernel: eth0: renamed from tmp85470
Dec 13 04:12:57.415092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 04:12:57.415282 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc230f65268205: link becomes ready
Dec 13 04:12:57.415576 systemd-networkd[971]: lxc230f65268205: Gained carrier
Dec 13 04:12:57.480310 kubelet[1392]: E1213 04:12:57.480218 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:57.650540 env[1138]: time="2024-12-13T04:12:57.650105860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:12:57.650540 env[1138]: time="2024-12-13T04:12:57.650174909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:12:57.650540 env[1138]: time="2024-12-13T04:12:57.650188525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:12:57.651895 env[1138]: time="2024-12-13T04:12:57.651840794Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8547034beb58637bba07ae7e884809b6180423fe19858adbb508aabf80a35b31 pid=2573 runtime=io.containerd.runc.v2
Dec 13 04:12:57.681620 systemd[1]: Started cri-containerd-8547034beb58637bba07ae7e884809b6180423fe19858adbb508aabf80a35b31.scope.
Dec 13 04:12:57.727515 env[1138]: time="2024-12-13T04:12:57.727447415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9b3125e4-d8a3-4859-b3a0-5690ecc28e1c,Namespace:default,Attempt:0,} returns sandbox id \"8547034beb58637bba07ae7e884809b6180423fe19858adbb508aabf80a35b31\""
Dec 13 04:12:57.729785 env[1138]: time="2024-12-13T04:12:57.729649514Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 04:12:58.481234 kubelet[1392]: E1213 04:12:58.481177 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:12:58.645318 systemd-networkd[971]: lxc230f65268205: Gained IPv6LL
Dec 13 04:12:59.481701 kubelet[1392]: E1213 04:12:59.481631 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:00.482603 kubelet[1392]: E1213 04:13:00.482534 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:01.483040 kubelet[1392]: E1213 04:13:01.482981 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:01.796148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843903588.mount: Deactivated successfully.
Dec 13 04:13:02.483647 kubelet[1392]: E1213 04:13:02.483567 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:03.484075 kubelet[1392]: E1213 04:13:03.483962 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:04.485198 kubelet[1392]: E1213 04:13:04.485104 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:05.188904 env[1138]: time="2024-12-13T04:13:05.188732619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:05.193829 env[1138]: time="2024-12-13T04:13:05.192480076Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:05.196941 env[1138]: time="2024-12-13T04:13:05.196870079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:05.201332 env[1138]: time="2024-12-13T04:13:05.201270059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:05.203425 env[1138]: time="2024-12-13T04:13:05.203364147Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 04:13:05.209955 env[1138]: time="2024-12-13T04:13:05.209851532Z" level=info msg="CreateContainer within sandbox \"8547034beb58637bba07ae7e884809b6180423fe19858adbb508aabf80a35b31\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 04:13:05.241727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971981029.mount: Deactivated successfully.
Dec 13 04:13:05.258321 env[1138]: time="2024-12-13T04:13:05.258156042Z" level=info msg="CreateContainer within sandbox \"8547034beb58637bba07ae7e884809b6180423fe19858adbb508aabf80a35b31\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"22ddfc134d0e774d9ede82dbe29d94d83d1579e177b23f856e9d69a7ba4da54e\""
Dec 13 04:13:05.260692 env[1138]: time="2024-12-13T04:13:05.260630381Z" level=info msg="StartContainer for \"22ddfc134d0e774d9ede82dbe29d94d83d1579e177b23f856e9d69a7ba4da54e\""
Dec 13 04:13:05.312339 systemd[1]: Started cri-containerd-22ddfc134d0e774d9ede82dbe29d94d83d1579e177b23f856e9d69a7ba4da54e.scope.
Dec 13 04:13:05.344563 env[1138]: time="2024-12-13T04:13:05.344520653Z" level=info msg="StartContainer for \"22ddfc134d0e774d9ede82dbe29d94d83d1579e177b23f856e9d69a7ba4da54e\" returns successfully"
Dec 13 04:13:05.486182 kubelet[1392]: E1213 04:13:05.485968 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:06.117782 kubelet[1392]: I1213 04:13:06.117643 1392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.640223211 podStartE2EDuration="10.117613647s" podCreationTimestamp="2024-12-13 04:12:56 +0000 UTC" firstStartedPulling="2024-12-13 04:12:57.729223024 +0000 UTC m=+44.395211984" lastFinishedPulling="2024-12-13 04:13:05.206613409 +0000 UTC m=+51.872602420" observedRunningTime="2024-12-13 04:13:06.116200276 +0000 UTC m=+52.782189336" watchObservedRunningTime="2024-12-13 04:13:06.117613647 +0000 UTC m=+52.783602657"
Dec 13 04:13:06.487011 kubelet[1392]: E1213 04:13:06.486808 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:07.487356 kubelet[1392]: E1213 04:13:07.487165 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:08.488225 kubelet[1392]: E1213 04:13:08.488144 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:09.489429 kubelet[1392]: E1213 04:13:09.489359 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:10.490149 kubelet[1392]: E1213 04:13:10.490076 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:11.490322 kubelet[1392]: E1213 04:13:11.490274 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:12.491616 kubelet[1392]: E1213 04:13:12.491562 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:13.492902 kubelet[1392]: E1213 04:13:13.492841 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:14.438117 kubelet[1392]: E1213 04:13:14.438072 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:14.493380 kubelet[1392]: E1213 04:13:14.493340 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:15.040847 systemd[1]: Created slice kubepods-besteffort-pod3a28997b_52e1_4a35_9d62_be3f5355d670.slice.
Dec 13 04:13:15.076596 kubelet[1392]: I1213 04:13:15.076543 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed029ec7-2cef-4c32-b976-2dbeb596615e\" (UniqueName: \"kubernetes.io/nfs/3a28997b-52e1-4a35-9d62-be3f5355d670-pvc-ed029ec7-2cef-4c32-b976-2dbeb596615e\") pod \"test-pod-1\" (UID: \"3a28997b-52e1-4a35-9d62-be3f5355d670\") " pod="default/test-pod-1"
Dec 13 04:13:15.076965 kubelet[1392]: I1213 04:13:15.076925 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6cq\" (UniqueName: \"kubernetes.io/projected/3a28997b-52e1-4a35-9d62-be3f5355d670-kube-api-access-zp6cq\") pod \"test-pod-1\" (UID: \"3a28997b-52e1-4a35-9d62-be3f5355d670\") " pod="default/test-pod-1"
Dec 13 04:13:15.274935 kernel: FS-Cache: Loaded
Dec 13 04:13:15.355386 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 04:13:15.355548 kernel: RPC: Registered udp transport module.
Dec 13 04:13:15.355607 kernel: RPC: Registered tcp transport module.
Dec 13 04:13:15.355657 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 04:13:15.439824 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 04:13:15.494729 kubelet[1392]: E1213 04:13:15.494668 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:15.738472 kernel: NFS: Registering the id_resolver key type
Dec 13 04:13:15.738733 kernel: Key type id_resolver registered
Dec 13 04:13:15.738848 kernel: Key type id_legacy registered
Dec 13 04:13:15.832262 nfsidmap[2708]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Dec 13 04:13:15.843634 nfsidmap[2709]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Dec 13 04:13:15.949811 env[1138]: time="2024-12-13T04:13:15.949323769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3a28997b-52e1-4a35-9d62-be3f5355d670,Namespace:default,Attempt:0,}"
Dec 13 04:13:16.036102 systemd-networkd[971]: lxcb5c9fd19a165: Link UP
Dec 13 04:13:16.040846 kernel: eth0: renamed from tmpa82be
Dec 13 04:13:16.051961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 04:13:16.052208 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb5c9fd19a165: link becomes ready
Dec 13 04:13:16.055035 systemd-networkd[971]: lxcb5c9fd19a165: Gained carrier
Dec 13 04:13:16.347633 env[1138]: time="2024-12-13T04:13:16.347487496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:13:16.348019 env[1138]: time="2024-12-13T04:13:16.347598366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:13:16.348019 env[1138]: time="2024-12-13T04:13:16.347632059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:13:16.350276 env[1138]: time="2024-12-13T04:13:16.348357692Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a82be40b09a2c82c09c428ef1c4cf138eca3bd892d03811936e3f60ad94499c1 pid=2735 runtime=io.containerd.runc.v2
Dec 13 04:13:16.376657 systemd[1]: Started cri-containerd-a82be40b09a2c82c09c428ef1c4cf138eca3bd892d03811936e3f60ad94499c1.scope.
Dec 13 04:13:16.432949 env[1138]: time="2024-12-13T04:13:16.432879869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3a28997b-52e1-4a35-9d62-be3f5355d670,Namespace:default,Attempt:0,} returns sandbox id \"a82be40b09a2c82c09c428ef1c4cf138eca3bd892d03811936e3f60ad94499c1\""
Dec 13 04:13:16.436491 env[1138]: time="2024-12-13T04:13:16.436365805Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 04:13:16.495683 kubelet[1392]: E1213 04:13:16.495649 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:16.981018 env[1138]: time="2024-12-13T04:13:16.980914646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:16.985232 env[1138]: time="2024-12-13T04:13:16.985148608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:16.990422 env[1138]: time="2024-12-13T04:13:16.990337236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:16.995212 env[1138]: time="2024-12-13T04:13:16.995116970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:13:17.000904 env[1138]: time="2024-12-13T04:13:17.000822395Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 04:13:17.010104 env[1138]: time="2024-12-13T04:13:17.010013281Z" level=info msg="CreateContainer within sandbox \"a82be40b09a2c82c09c428ef1c4cf138eca3bd892d03811936e3f60ad94499c1\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 04:13:17.056478 env[1138]: time="2024-12-13T04:13:17.056374271Z" level=info msg="CreateContainer within sandbox \"a82be40b09a2c82c09c428ef1c4cf138eca3bd892d03811936e3f60ad94499c1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6a80ee91fd2e7b34c9b067bd2ba55ad0543ed138817b232072522606c4f8333a\""
Dec 13 04:13:17.058123 env[1138]: time="2024-12-13T04:13:17.057840295Z" level=info msg="StartContainer for \"6a80ee91fd2e7b34c9b067bd2ba55ad0543ed138817b232072522606c4f8333a\""
Dec 13 04:13:17.100714 systemd[1]: Started cri-containerd-6a80ee91fd2e7b34c9b067bd2ba55ad0543ed138817b232072522606c4f8333a.scope.
Dec 13 04:13:17.150845 env[1138]: time="2024-12-13T04:13:17.150804491Z" level=info msg="StartContainer for \"6a80ee91fd2e7b34c9b067bd2ba55ad0543ed138817b232072522606c4f8333a\" returns successfully"
Dec 13 04:13:17.208476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163230302.mount: Deactivated successfully.
Dec 13 04:13:17.496891 kubelet[1392]: E1213 04:13:17.496732 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:18.037351 systemd-networkd[971]: lxcb5c9fd19a165: Gained IPv6LL
Dec 13 04:13:18.497811 kubelet[1392]: E1213 04:13:18.497539 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:19.498357 kubelet[1392]: E1213 04:13:19.498290 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:20.500363 kubelet[1392]: E1213 04:13:20.500261 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:21.501309 kubelet[1392]: E1213 04:13:21.501227 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:22.501517 kubelet[1392]: E1213 04:13:22.501443 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:23.504553 kubelet[1392]: E1213 04:13:23.504494 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:24.505392 kubelet[1392]: E1213 04:13:24.505314 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:25.506344 kubelet[1392]: E1213 04:13:25.506288 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:26.507554 kubelet[1392]: E1213 04:13:26.507504 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:27.213814 kubelet[1392]: I1213 04:13:27.213669 1392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=27.643710462 podStartE2EDuration="28.213604989s" podCreationTimestamp="2024-12-13 04:12:59 +0000 UTC" firstStartedPulling="2024-12-13 04:13:16.43601977 +0000 UTC m=+63.102008740" lastFinishedPulling="2024-12-13 04:13:17.005914257 +0000 UTC m=+63.671903267" observedRunningTime="2024-12-13 04:13:18.154432971 +0000 UTC m=+64.820422071" watchObservedRunningTime="2024-12-13 04:13:27.213604989 +0000 UTC m=+73.879593999"
Dec 13 04:13:27.265190 systemd[1]: run-containerd-runc-k8s.io-37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3-runc.GXI0cy.mount: Deactivated successfully.
Dec 13 04:13:27.302126 env[1138]: time="2024-12-13T04:13:27.301975742Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 04:13:27.314323 env[1138]: time="2024-12-13T04:13:27.314250122Z" level=info msg="StopContainer for \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\" with timeout 2 (s)"
Dec 13 04:13:27.314907 env[1138]: time="2024-12-13T04:13:27.314847921Z" level=info msg="Stop container \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\" with signal terminated"
Dec 13 04:13:27.328401 systemd-networkd[971]: lxc_health: Link DOWN
Dec 13 04:13:27.328415 systemd-networkd[971]: lxc_health: Lost carrier
Dec 13 04:13:27.382744 systemd[1]: cri-containerd-37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3.scope: Deactivated successfully.
Dec 13 04:13:27.383449 systemd[1]: cri-containerd-37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3.scope: Consumed 8.749s CPU time.
Dec 13 04:13:27.427648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3-rootfs.mount: Deactivated successfully.
Dec 13 04:13:27.444148 env[1138]: time="2024-12-13T04:13:27.444083317Z" level=info msg="shim disconnected" id=37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3
Dec 13 04:13:27.444611 env[1138]: time="2024-12-13T04:13:27.444551561Z" level=warning msg="cleaning up after shim disconnected" id=37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3 namespace=k8s.io
Dec 13 04:13:27.444817 env[1138]: time="2024-12-13T04:13:27.444734637Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:27.456402 env[1138]: time="2024-12-13T04:13:27.456327462Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2866 runtime=io.containerd.runc.v2\n"
Dec 13 04:13:27.460511 env[1138]: time="2024-12-13T04:13:27.460465863Z" level=info msg="StopContainer for \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\" returns successfully"
Dec 13 04:13:27.461565 env[1138]: time="2024-12-13T04:13:27.461522949Z" level=info msg="StopPodSandbox for \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\""
Dec 13 04:13:27.462007 env[1138]: time="2024-12-13T04:13:27.461960175Z" level=info msg="Container to stop \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:13:27.462155 env[1138]: time="2024-12-13T04:13:27.462121009Z" level=info msg="Container to stop \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:13:27.462287 env[1138]: time="2024-12-13T04:13:27.462254740Z" level=info msg="Container to stop \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:13:27.462413 env[1138]: time="2024-12-13T04:13:27.462380588Z" level=info msg="Container to stop \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:13:27.462538 env[1138]: time="2024-12-13T04:13:27.462505363Z" level=info msg="Container to stop \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:13:27.466038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8-shm.mount: Deactivated successfully.
Dec 13 04:13:27.475691 systemd[1]: cri-containerd-c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8.scope: Deactivated successfully.
Dec 13 04:13:27.509188 kubelet[1392]: E1213 04:13:27.509088 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:27.514422 env[1138]: time="2024-12-13T04:13:27.514341860Z" level=info msg="shim disconnected" id=c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8
Dec 13 04:13:27.514811 env[1138]: time="2024-12-13T04:13:27.514727757Z" level=warning msg="cleaning up after shim disconnected" id=c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8 namespace=k8s.io
Dec 13 04:13:27.514974 env[1138]: time="2024-12-13T04:13:27.514943986Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:27.524023 env[1138]: time="2024-12-13T04:13:27.523970606Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2896 runtime=io.containerd.runc.v2\n"
Dec 13 04:13:27.525174 env[1138]: time="2024-12-13T04:13:27.525128712Z" level=info msg="TearDown network for sandbox \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" successfully"
Dec 13 04:13:27.525332 env[1138]: time="2024-12-13T04:13:27.525293263Z" level=info msg="StopPodSandbox for \"c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8\" returns successfully"
Dec 13 04:13:27.665792 kubelet[1392]: I1213 04:13:27.665661 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecfe4f83-a1d3-40da-aca7-af579fc21da1-clustermesh-secrets\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.665792 kubelet[1392]: I1213 04:13:27.665788 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-xtables-lock\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666147 kubelet[1392]: I1213 04:13:27.665844 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-net\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666147 kubelet[1392]: I1213 04:13:27.665890 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cni-path\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666147 kubelet[1392]: I1213 04:13:27.665931 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-etc-cni-netd\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666147 kubelet[1392]: I1213 04:13:27.665982 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-config-path\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666147 kubelet[1392]: I1213 04:13:27.666033 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ft77\" (UniqueName: \"kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-kube-api-access-6ft77\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666147 kubelet[1392]: I1213 04:13:27.666073 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-bpf-maps\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666545 kubelet[1392]: I1213 04:13:27.666114 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-lib-modules\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666545 kubelet[1392]: I1213 04:13:27.666152 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-kernel\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666545 kubelet[1392]: I1213 04:13:27.666191 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hostproc\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666545 kubelet[1392]: I1213 04:13:27.666232 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hubble-tls\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666545 kubelet[1392]: I1213 04:13:27.666268 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-run\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.666545 kubelet[1392]: I1213 04:13:27.666306 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-cgroup\") pod \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\" (UID: \"ecfe4f83-a1d3-40da-aca7-af579fc21da1\") "
Dec 13 04:13:27.668593 kubelet[1392]: I1213 04:13:27.668518 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:27.668732 kubelet[1392]: I1213 04:13:27.668634 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.668732 kubelet[1392]: I1213 04:13:27.668686 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.669170 kubelet[1392]: I1213 04:13:27.668729 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.669170 kubelet[1392]: I1213 04:13:27.668841 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hostproc" (OuterVolumeSpecName: "hostproc") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.674840 kubelet[1392]: I1213 04:13:27.674692 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:13:27.675020 kubelet[1392]: I1213 04:13:27.674896 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.675020 kubelet[1392]: I1213 04:13:27.674953 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cni-path" (OuterVolumeSpecName: "cni-path") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.675020 kubelet[1392]: I1213 04:13:27.674992 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.675237 kubelet[1392]: I1213 04:13:27.675025 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.680127 kubelet[1392]: I1213 04:13:27.680062 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecfe4f83-a1d3-40da-aca7-af579fc21da1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:13:27.680444 kubelet[1392]: I1213 04:13:27.680371 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:13:27.680562 kubelet[1392]: I1213 04:13:27.680465 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.680813 kubelet[1392]: I1213 04:13:27.680719 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-kube-api-access-6ft77" (OuterVolumeSpecName: "kube-api-access-6ft77") pod "ecfe4f83-a1d3-40da-aca7-af579fc21da1" (UID: "ecfe4f83-a1d3-40da-aca7-af579fc21da1"). InnerVolumeSpecName "kube-api-access-6ft77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:13:27.771403 kubelet[1392]: I1213 04:13:27.766904 1392 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecfe4f83-a1d3-40da-aca7-af579fc21da1-clustermesh-secrets\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.771403 kubelet[1392]: I1213 04:13:27.766965 1392 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-xtables-lock\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.771403 kubelet[1392]: I1213 04:13:27.766990 1392 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-net\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.771403 kubelet[1392]: I1213 04:13:27.767013 1392 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-bpf-maps\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.771403 kubelet[1392]: I1213 04:13:27.767036 1392 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cni-path\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.771403 kubelet[1392]: I1213 04:13:27.767057 1392 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-etc-cni-netd\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.771403 kubelet[1392]: I1213 04:13:27.767079 1392 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-config-path\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.771403 kubelet[1392]: I1213 
04:13:27.767100 1392 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6ft77\" (UniqueName: \"kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-kube-api-access-6ft77\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.772075 kubelet[1392]: I1213 04:13:27.767122 1392 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hubble-tls\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.772075 kubelet[1392]: I1213 04:13:27.767142 1392 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-run\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.772075 kubelet[1392]: I1213 04:13:27.767162 1392 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-cilium-cgroup\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.772075 kubelet[1392]: I1213 04:13:27.767182 1392 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-lib-modules\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.772075 kubelet[1392]: I1213 04:13:27.767206 1392 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-host-proc-sys-kernel\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:27.772075 kubelet[1392]: I1213 04:13:27.767243 1392 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecfe4f83-a1d3-40da-aca7-af579fc21da1-hostproc\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:28.171590 kubelet[1392]: I1213 04:13:28.171537 1392 scope.go:117] "RemoveContainer" 
containerID="37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3" Dec 13 04:13:28.174076 env[1138]: time="2024-12-13T04:13:28.173964472Z" level=info msg="RemoveContainer for \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\"" Dec 13 04:13:28.180405 systemd[1]: Removed slice kubepods-burstable-podecfe4f83_a1d3_40da_aca7_af579fc21da1.slice. Dec 13 04:13:28.180703 systemd[1]: kubepods-burstable-podecfe4f83_a1d3_40da_aca7_af579fc21da1.slice: Consumed 8.866s CPU time. Dec 13 04:13:28.186095 env[1138]: time="2024-12-13T04:13:28.185851338Z" level=info msg="RemoveContainer for \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\" returns successfully" Dec 13 04:13:28.187435 kubelet[1392]: I1213 04:13:28.187394 1392 scope.go:117] "RemoveContainer" containerID="da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091" Dec 13 04:13:28.190508 env[1138]: time="2024-12-13T04:13:28.190440880Z" level=info msg="RemoveContainer for \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\"" Dec 13 04:13:28.195607 env[1138]: time="2024-12-13T04:13:28.195528643Z" level=info msg="RemoveContainer for \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\" returns successfully" Dec 13 04:13:28.196049 kubelet[1392]: I1213 04:13:28.195983 1392 scope.go:117] "RemoveContainer" containerID="75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73" Dec 13 04:13:28.198696 env[1138]: time="2024-12-13T04:13:28.198626409Z" level=info msg="RemoveContainer for \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\"" Dec 13 04:13:28.203504 env[1138]: time="2024-12-13T04:13:28.203395731Z" level=info msg="RemoveContainer for \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\" returns successfully" Dec 13 04:13:28.203787 kubelet[1392]: I1213 04:13:28.203701 1392 scope.go:117] "RemoveContainer" containerID="5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c" Dec 13 
04:13:28.209924 env[1138]: time="2024-12-13T04:13:28.209307048Z" level=info msg="RemoveContainer for \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\"" Dec 13 04:13:28.214425 env[1138]: time="2024-12-13T04:13:28.214362689Z" level=info msg="RemoveContainer for \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\" returns successfully" Dec 13 04:13:28.215036 kubelet[1392]: I1213 04:13:28.214992 1392 scope.go:117] "RemoveContainer" containerID="b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8" Dec 13 04:13:28.217738 env[1138]: time="2024-12-13T04:13:28.217685802Z" level=info msg="RemoveContainer for \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\"" Dec 13 04:13:28.222336 env[1138]: time="2024-12-13T04:13:28.222280835Z" level=info msg="RemoveContainer for \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\" returns successfully" Dec 13 04:13:28.222817 kubelet[1392]: I1213 04:13:28.222742 1392 scope.go:117] "RemoveContainer" containerID="37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3" Dec 13 04:13:28.223502 env[1138]: time="2024-12-13T04:13:28.223350484Z" level=error msg="ContainerStatus for \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\": not found" Dec 13 04:13:28.223938 kubelet[1392]: E1213 04:13:28.223899 1392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\": not found" containerID="37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3" Dec 13 04:13:28.224423 kubelet[1392]: I1213 04:13:28.224184 1392 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3"} err="failed to get container status \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"37cd3a9cf2629e830cd2f16308bd1865e874fbd97cdd5dd056e3270d117b62e3\": not found" Dec 13 04:13:28.224423 kubelet[1392]: I1213 04:13:28.224412 1392 scope.go:117] "RemoveContainer" containerID="da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091" Dec 13 04:13:28.225025 env[1138]: time="2024-12-13T04:13:28.224918533Z" level=error msg="ContainerStatus for \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\": not found" Dec 13 04:13:28.225505 kubelet[1392]: E1213 04:13:28.225469 1392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\": not found" containerID="da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091" Dec 13 04:13:28.225821 kubelet[1392]: I1213 04:13:28.225734 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091"} err="failed to get container status \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\": rpc error: code = NotFound desc = an error occurred when try to find container \"da6cc7cc7d080559b7252f82bbd6f7f064abe8768e2dce0ecd826babc0c0e091\": not found" Dec 13 04:13:28.226000 kubelet[1392]: I1213 04:13:28.225974 1392 scope.go:117] "RemoveContainer" containerID="75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73" Dec 13 04:13:28.226541 env[1138]: 
time="2024-12-13T04:13:28.226438862Z" level=error msg="ContainerStatus for \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\": not found" Dec 13 04:13:28.226969 kubelet[1392]: E1213 04:13:28.226931 1392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\": not found" containerID="75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73" Dec 13 04:13:28.227224 kubelet[1392]: I1213 04:13:28.227180 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73"} err="failed to get container status \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\": rpc error: code = NotFound desc = an error occurred when try to find container \"75b916288e76f94163490d05e83a460fc6f471d0775270e7a132fcad8bc0ec73\": not found" Dec 13 04:13:28.227479 kubelet[1392]: I1213 04:13:28.227449 1392 scope.go:117] "RemoveContainer" containerID="5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c" Dec 13 04:13:28.228139 env[1138]: time="2024-12-13T04:13:28.228044303Z" level=error msg="ContainerStatus for \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\": not found" Dec 13 04:13:28.228710 kubelet[1392]: E1213 04:13:28.228604 1392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\": not found" 
containerID="5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c" Dec 13 04:13:28.228888 kubelet[1392]: I1213 04:13:28.228745 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c"} err="failed to get container status \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a3bf45b93bb5ed03cded91596a20251ca8e0751a7632e639fe26ee10709925c\": not found" Dec 13 04:13:28.228888 kubelet[1392]: I1213 04:13:28.228813 1392 scope.go:117] "RemoveContainer" containerID="b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8" Dec 13 04:13:28.229364 env[1138]: time="2024-12-13T04:13:28.229267391Z" level=error msg="ContainerStatus for \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\": not found" Dec 13 04:13:28.230043 kubelet[1392]: E1213 04:13:28.230003 1392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\": not found" containerID="b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8" Dec 13 04:13:28.230268 kubelet[1392]: I1213 04:13:28.230225 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8"} err="failed to get container status \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b65afb64283eb6ab7b3a0997fd08a97e03f3cac16936e26ff183455e9fa641c8\": not found" Dec 13 04:13:28.246684 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2cfe5660f684aa829b1e522b63df69058bc8c81924518bc0cf60911d8c15ee8-rootfs.mount: Deactivated successfully. Dec 13 04:13:28.246934 systemd[1]: var-lib-kubelet-pods-ecfe4f83\x2da1d3\x2d40da\x2daca7\x2daf579fc21da1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ft77.mount: Deactivated successfully. Dec 13 04:13:28.247092 systemd[1]: var-lib-kubelet-pods-ecfe4f83\x2da1d3\x2d40da\x2daca7\x2daf579fc21da1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:13:28.247242 systemd[1]: var-lib-kubelet-pods-ecfe4f83\x2da1d3\x2d40da\x2daca7\x2daf579fc21da1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:13:28.509456 kubelet[1392]: E1213 04:13:28.509300 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:28.655634 kubelet[1392]: I1213 04:13:28.655563 1392 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecfe4f83-a1d3-40da-aca7-af579fc21da1" path="/var/lib/kubelet/pods/ecfe4f83-a1d3-40da-aca7-af579fc21da1/volumes" Dec 13 04:13:29.510516 kubelet[1392]: E1213 04:13:29.510410 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:29.730587 kubelet[1392]: E1213 04:13:29.730494 1392 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:13:30.511136 kubelet[1392]: E1213 04:13:30.511032 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:31.511963 kubelet[1392]: E1213 04:13:31.511859 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:32.194409 
kubelet[1392]: E1213 04:13:32.194325 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecfe4f83-a1d3-40da-aca7-af579fc21da1" containerName="mount-cgroup" Dec 13 04:13:32.194867 kubelet[1392]: E1213 04:13:32.194734 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecfe4f83-a1d3-40da-aca7-af579fc21da1" containerName="cilium-agent" Dec 13 04:13:32.195208 kubelet[1392]: E1213 04:13:32.195123 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecfe4f83-a1d3-40da-aca7-af579fc21da1" containerName="apply-sysctl-overwrites" Dec 13 04:13:32.195208 kubelet[1392]: E1213 04:13:32.195180 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecfe4f83-a1d3-40da-aca7-af579fc21da1" containerName="mount-bpf-fs" Dec 13 04:13:32.195208 kubelet[1392]: E1213 04:13:32.195199 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecfe4f83-a1d3-40da-aca7-af579fc21da1" containerName="clean-cilium-state" Dec 13 04:13:32.195515 kubelet[1392]: I1213 04:13:32.195254 1392 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecfe4f83-a1d3-40da-aca7-af579fc21da1" containerName="cilium-agent" Dec 13 04:13:32.207366 systemd[1]: Created slice kubepods-besteffort-pod38102156_d2fc_4dab_82a9_b5d503a8ba09.slice. Dec 13 04:13:32.218410 systemd[1]: Created slice kubepods-burstable-pod5d7bce7b_30db_48b5_9250_9ecd6e3d1735.slice. 
Dec 13 04:13:32.299218 kubelet[1392]: I1213 04:13:32.299158 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-bpf-maps\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff" Dec 13 04:13:32.299624 kubelet[1392]: I1213 04:13:32.299584 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hostproc\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff" Dec 13 04:13:32.299934 kubelet[1392]: I1213 04:13:32.299893 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cni-path\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff" Dec 13 04:13:32.300169 kubelet[1392]: I1213 04:13:32.300136 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-lib-modules\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff" Dec 13 04:13:32.300385 kubelet[1392]: I1213 04:13:32.300350 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-config-path\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff" Dec 13 04:13:32.300588 kubelet[1392]: I1213 04:13:32.300555 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-msg77\" (UniqueName: \"kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-kube-api-access-msg77\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.300820 kubelet[1392]: I1213 04:13:32.300748 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-run\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.301062 kubelet[1392]: I1213 04:13:32.301024 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksbq2\" (UniqueName: \"kubernetes.io/projected/38102156-d2fc-4dab-82a9-b5d503a8ba09-kube-api-access-ksbq2\") pod \"cilium-operator-5d85765b45-pd98k\" (UID: \"38102156-d2fc-4dab-82a9-b5d503a8ba09\") " pod="kube-system/cilium-operator-5d85765b45-pd98k"
Dec 13 04:13:32.301280 kubelet[1392]: I1213 04:13:32.301247 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-kernel\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.301490 kubelet[1392]: I1213 04:13:32.301458 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hubble-tls\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.301705 kubelet[1392]: I1213 04:13:32.301669 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-net\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.302017 kubelet[1392]: I1213 04:13:32.301982 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-xtables-lock\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.302254 kubelet[1392]: I1213 04:13:32.302219 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-cgroup\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.302456 kubelet[1392]: I1213 04:13:32.302424 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-clustermesh-secrets\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.302667 kubelet[1392]: I1213 04:13:32.302633 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-ipsec-secrets\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.302928 kubelet[1392]: I1213 04:13:32.302892 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38102156-d2fc-4dab-82a9-b5d503a8ba09-cilium-config-path\") pod \"cilium-operator-5d85765b45-pd98k\" (UID: \"38102156-d2fc-4dab-82a9-b5d503a8ba09\") " pod="kube-system/cilium-operator-5d85765b45-pd98k"
Dec 13 04:13:32.303148 kubelet[1392]: I1213 04:13:32.303115 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-etc-cni-netd\") pod \"cilium-mr5ff\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") " pod="kube-system/cilium-mr5ff"
Dec 13 04:13:32.512896 kubelet[1392]: E1213 04:13:32.512827 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:32.515884 env[1138]: time="2024-12-13T04:13:32.515821913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pd98k,Uid:38102156-d2fc-4dab-82a9-b5d503a8ba09,Namespace:kube-system,Attempt:0,}"
Dec 13 04:13:32.530799 env[1138]: time="2024-12-13T04:13:32.530723160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mr5ff,Uid:5d7bce7b-30db-48b5-9250-9ecd6e3d1735,Namespace:kube-system,Attempt:0,}"
Dec 13 04:13:32.544561 env[1138]: time="2024-12-13T04:13:32.544433781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:13:32.544561 env[1138]: time="2024-12-13T04:13:32.544484147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:13:32.544983 env[1138]: time="2024-12-13T04:13:32.544534321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:13:32.544983 env[1138]: time="2024-12-13T04:13:32.544712828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4743a9b9a632190250e665bebe6b013ffdd7d2ad7cd78f822f3cdf4f38e2ce70 pid=2929 runtime=io.containerd.runc.v2
Dec 13 04:13:32.548737 env[1138]: time="2024-12-13T04:13:32.548658330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:13:32.548737 env[1138]: time="2024-12-13T04:13:32.548731178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:13:32.549013 env[1138]: time="2024-12-13T04:13:32.548785109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:13:32.549013 env[1138]: time="2024-12-13T04:13:32.548937146Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901 pid=2940 runtime=io.containerd.runc.v2
Dec 13 04:13:32.564146 systemd[1]: Started cri-containerd-ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901.scope.
Dec 13 04:13:32.573720 systemd[1]: Started cri-containerd-4743a9b9a632190250e665bebe6b013ffdd7d2ad7cd78f822f3cdf4f38e2ce70.scope.
Dec 13 04:13:32.607970 env[1138]: time="2024-12-13T04:13:32.607912287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mr5ff,Uid:5d7bce7b-30db-48b5-9250-9ecd6e3d1735,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\""
Dec 13 04:13:32.610610 env[1138]: time="2024-12-13T04:13:32.610578877Z" level=info msg="CreateContainer within sandbox \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 04:13:32.625950 env[1138]: time="2024-12-13T04:13:32.625901629Z" level=info msg="CreateContainer within sandbox \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\""
Dec 13 04:13:32.626625 env[1138]: time="2024-12-13T04:13:32.626595618Z" level=info msg="StartContainer for \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\""
Dec 13 04:13:32.648358 systemd[1]: Started cri-containerd-30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52.scope.
Dec 13 04:13:32.656329 env[1138]: time="2024-12-13T04:13:32.656084571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pd98k,Uid:38102156-d2fc-4dab-82a9-b5d503a8ba09,Namespace:kube-system,Attempt:0,} returns sandbox id \"4743a9b9a632190250e665bebe6b013ffdd7d2ad7cd78f822f3cdf4f38e2ce70\""
Dec 13 04:13:32.660803 env[1138]: time="2024-12-13T04:13:32.660737618Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 04:13:32.664733 systemd[1]: cri-containerd-30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52.scope: Deactivated successfully.
Dec 13 04:13:32.710556 env[1138]: time="2024-12-13T04:13:32.710471217Z" level=info msg="shim disconnected" id=30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52
Dec 13 04:13:32.710556 env[1138]: time="2024-12-13T04:13:32.710536531Z" level=warning msg="cleaning up after shim disconnected" id=30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52 namespace=k8s.io
Dec 13 04:13:32.710556 env[1138]: time="2024-12-13T04:13:32.710546649Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:32.721730 env[1138]: time="2024-12-13T04:13:32.721647649Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3028 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:13:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 04:13:32.722682 env[1138]: time="2024-12-13T04:13:32.722484928Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed"
Dec 13 04:13:32.723897 env[1138]: time="2024-12-13T04:13:32.723829745Z" level=error msg="Failed to pipe stdout of container \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\"" error="reading from a closed fifo"
Dec 13 04:13:32.724914 env[1138]: time="2024-12-13T04:13:32.724859086Z" level=error msg="Failed to pipe stderr of container \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\"" error="reading from a closed fifo"
Dec 13 04:13:32.727802 env[1138]: time="2024-12-13T04:13:32.727704774Z" level=error msg="StartContainer for \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 04:13:32.727955 kubelet[1392]: E1213 04:13:32.727922 1392 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52"
Dec 13 04:13:32.731975 kubelet[1392]: E1213 04:13:32.731934 1392 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 04:13:32.731975 kubelet[1392]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 04:13:32.731975 kubelet[1392]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 04:13:32.731975 kubelet[1392]: rm /hostbin/cilium-mount
Dec 13 04:13:32.732370 kubelet[1392]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msg77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mr5ff_kube-system(5d7bce7b-30db-48b5-9250-9ecd6e3d1735): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 04:13:32.732370 kubelet[1392]: > logger="UnhandledError"
Dec 13 04:13:32.733126 kubelet[1392]: E1213 04:13:32.733077 1392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mr5ff" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735"
Dec 13 04:13:33.195541 env[1138]: time="2024-12-13T04:13:33.195418279Z" level=info msg="CreateContainer within sandbox \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Dec 13 04:13:33.224332 env[1138]: time="2024-12-13T04:13:33.224251107Z" level=info msg="CreateContainer within sandbox \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\""
Dec 13 04:13:33.226510 env[1138]: time="2024-12-13T04:13:33.226456828Z" level=info msg="StartContainer for \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\""
Dec 13 04:13:33.262512 systemd[1]: Started cri-containerd-96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed.scope.
Dec 13 04:13:33.285881 systemd[1]: cri-containerd-96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed.scope: Deactivated successfully.
Dec 13 04:13:33.300078 env[1138]: time="2024-12-13T04:13:33.299978736Z" level=info msg="shim disconnected" id=96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed
Dec 13 04:13:33.300522 env[1138]: time="2024-12-13T04:13:33.300479320Z" level=warning msg="cleaning up after shim disconnected" id=96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed namespace=k8s.io
Dec 13 04:13:33.300692 env[1138]: time="2024-12-13T04:13:33.300657006Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:33.309449 env[1138]: time="2024-12-13T04:13:33.309381019Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3064 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:13:33Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 04:13:33.310175 env[1138]: time="2024-12-13T04:13:33.310075709Z" level=error msg="copy shim log" error="read /proc/self/fd/71: file already closed"
Dec 13 04:13:33.310871 env[1138]: time="2024-12-13T04:13:33.310801868Z" level=error msg="Failed to pipe stdout of container \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\"" error="reading from a closed fifo"
Dec 13 04:13:33.311051 env[1138]: time="2024-12-13T04:13:33.310887289Z" level=error msg="Failed to pipe stderr of container \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\"" error="reading from a closed fifo"
Dec 13 04:13:33.314204 env[1138]: time="2024-12-13T04:13:33.314085912Z" level=error msg="StartContainer for \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 04:13:33.314819 kubelet[1392]: E1213 04:13:33.314540 1392 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed"
Dec 13 04:13:33.316807 kubelet[1392]: E1213 04:13:33.315030 1392 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 04:13:33.316807 kubelet[1392]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 04:13:33.316807 kubelet[1392]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 04:13:33.316807 kubelet[1392]: rm /hostbin/cilium-mount
Dec 13 04:13:33.316807 kubelet[1392]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msg77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mr5ff_kube-system(5d7bce7b-30db-48b5-9250-9ecd6e3d1735): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 04:13:33.316807 kubelet[1392]: > logger="UnhandledError"
Dec 13 04:13:33.316807 kubelet[1392]: E1213 04:13:33.316194 1392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mr5ff" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735"
Dec 13 04:13:33.513502 kubelet[1392]: E1213 04:13:33.513285 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:34.202410 kubelet[1392]: I1213 04:13:34.202339 1392 scope.go:117] "RemoveContainer" containerID="30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52"
Dec 13 04:13:34.203135 kubelet[1392]: I1213 04:13:34.203083 1392 scope.go:117] "RemoveContainer" containerID="30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52"
Dec 13 04:13:34.206572 env[1138]: time="2024-12-13T04:13:34.206508731Z" level=info msg="RemoveContainer for \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\""
Dec 13 04:13:34.208378 env[1138]: time="2024-12-13T04:13:34.208222241Z" level=info msg="RemoveContainer for \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\""
Dec 13 04:13:34.209186 env[1138]: time="2024-12-13T04:13:34.208963950Z" level=error msg="RemoveContainer for \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\" failed" error="failed to set removing state for container \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\": container is already in removing state"
Dec 13 04:13:34.210894 kubelet[1392]: E1213 04:13:34.210720 1392 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\": container is already in removing state" containerID="30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52"
Dec 13 04:13:34.210894 kubelet[1392]: I1213 04:13:34.210828 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52"} err="rpc error: code = Unknown desc = failed to set removing state for container \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\": container is already in removing state"
Dec 13 04:13:34.214705 env[1138]: time="2024-12-13T04:13:34.214634654Z" level=info msg="RemoveContainer for \"30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52\" returns successfully"
Dec 13 04:13:34.215323 kubelet[1392]: E1213 04:13:34.215251 1392 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-mr5ff_kube-system(5d7bce7b-30db-48b5-9250-9ecd6e3d1735)\"" pod="kube-system/cilium-mr5ff" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735"
Dec 13 04:13:34.438457 kubelet[1392]: E1213 04:13:34.438286 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:34.514093 kubelet[1392]: E1213 04:13:34.513931 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:34.732517 kubelet[1392]: E1213 04:13:34.732461 1392 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 04:13:34.963663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419046575.mount: Deactivated successfully.
Dec 13 04:13:35.207139 env[1138]: time="2024-12-13T04:13:35.207010754Z" level=info msg="StopPodSandbox for \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\""
Dec 13 04:13:35.208357 env[1138]: time="2024-12-13T04:13:35.208301598Z" level=info msg="Container to stop \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:13:35.213325 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901-shm.mount: Deactivated successfully.
Dec 13 04:13:35.233577 systemd[1]: cri-containerd-ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901.scope: Deactivated successfully.
Dec 13 04:13:35.278829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901-rootfs.mount: Deactivated successfully.
Dec 13 04:13:35.388153 env[1138]: time="2024-12-13T04:13:35.388089786Z" level=info msg="shim disconnected" id=ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901
Dec 13 04:13:35.388153 env[1138]: time="2024-12-13T04:13:35.388134881Z" level=warning msg="cleaning up after shim disconnected" id=ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901 namespace=k8s.io
Dec 13 04:13:35.388153 env[1138]: time="2024-12-13T04:13:35.388145150Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:35.409660 env[1138]: time="2024-12-13T04:13:35.409594231Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3096 runtime=io.containerd.runc.v2\n"
Dec 13 04:13:35.410479 env[1138]: time="2024-12-13T04:13:35.410421130Z" level=info msg="TearDown network for sandbox \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\" successfully"
Dec 13 04:13:35.410668 env[1138]: time="2024-12-13T04:13:35.410623872Z" level=info msg="StopPodSandbox for \"ca5d73338343df90a8b550845894dd0326144a40ffdad8706e9d052597a4b901\" returns successfully"
Dec 13 04:13:35.514840 kubelet[1392]: E1213 04:13:35.514670 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532815 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cni-path\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532850 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-run\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532877 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-kernel\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532895 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hostproc\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532920 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msg77\" (UniqueName: \"kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-kube-api-access-msg77\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532939 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-net\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532957 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-xtables-lock\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532977 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-bpf-maps\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.532995 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-lib-modules\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.533014 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-clustermesh-secrets\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.533036 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-config-path\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.533055 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-ipsec-secrets\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.533076 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-etc-cni-netd\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.533095 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hubble-tls\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.533492 kubelet[1392]: I1213 04:13:35.533115 1392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-cgroup\") pod \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\" (UID: \"5d7bce7b-30db-48b5-9250-9ecd6e3d1735\") "
Dec 13 04:13:35.541962 kubelet[1392]: I1213 04:13:35.532934 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cni-path" (OuterVolumeSpecName: "cni-path") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542021 kubelet[1392]: I1213 04:13:35.533003 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hostproc" (OuterVolumeSpecName: "hostproc") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542021 kubelet[1392]: I1213 04:13:35.533033 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542021 kubelet[1392]: I1213 04:13:35.533066 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542021 kubelet[1392]: I1213 04:13:35.533109 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542021 kubelet[1392]: I1213 04:13:35.533167 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542162 kubelet[1392]: I1213 04:13:35.533183 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542162 kubelet[1392]: I1213 04:13:35.539881 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 04:13:35.542162 kubelet[1392]: I1213 04:13:35.541875 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542162 kubelet[1392]: I1213 04:13:35.541917 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.542162 kubelet[1392]: I1213 04:13:35.542093 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.546237 systemd[1]: var-lib-kubelet-pods-5d7bce7b\x2d30db\x2d48b5\x2d9250\x2d9ecd6e3d1735-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 04:13:35.559946 kubelet[1392]: I1213 04:13:35.559893 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:13:35.565066 kubelet[1392]: I1213 04:13:35.565036 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-kube-api-access-msg77" (OuterVolumeSpecName: "kube-api-access-msg77") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "kube-api-access-msg77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:13:35.566499 kubelet[1392]: I1213 04:13:35.566472 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:13:35.567228 kubelet[1392]: I1213 04:13:35.567166 1392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5d7bce7b-30db-48b5-9250-9ecd6e3d1735" (UID: "5d7bce7b-30db-48b5-9250-9ecd6e3d1735"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:13:35.634312 kubelet[1392]: I1213 04:13:35.634260 1392 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hubble-tls\") on node \"172.24.4.93\" DevicePath \"\""
Dec 13 04:13:35.634312 kubelet[1392]: I1213 04:13:35.634318 1392 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-cgroup\") on node \"172.24.4.93\" DevicePath \"\""
Dec 13 04:13:35.634468 kubelet[1392]: I1213 04:13:35.634343 1392 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-etc-cni-netd\") on node \"172.24.4.93\" DevicePath \"\""
Dec 13 04:13:35.634468 kubelet[1392]: I1213 04:13:35.634367 1392 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-hostproc\") on node \"172.24.4.93\" DevicePath \"\""
Dec 13 04:13:35.634468 kubelet[1392]: I1213 04:13:35.634388 1392 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cni-path\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634468 kubelet[1392]: I1213 04:13:35.634409 1392 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-run\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634468 kubelet[1392]: I1213 04:13:35.634430 1392 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-kernel\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634468 kubelet[1392]: I1213 04:13:35.634452 1392 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-bpf-maps\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634626 kubelet[1392]: I1213 04:13:35.634474 1392 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-lib-modules\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634626 kubelet[1392]: I1213 04:13:35.634497 1392 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-msg77\" (UniqueName: \"kubernetes.io/projected/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-kube-api-access-msg77\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634626 kubelet[1392]: I1213 04:13:35.634520 1392 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-host-proc-sys-net\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634626 kubelet[1392]: I1213 04:13:35.634543 1392 reconciler_common.go:288] 
"Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-xtables-lock\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634626 kubelet[1392]: I1213 04:13:35.634563 1392 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-clustermesh-secrets\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634626 kubelet[1392]: I1213 04:13:35.634583 1392 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-config-path\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.634626 kubelet[1392]: I1213 04:13:35.634604 1392 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d7bce7b-30db-48b5-9250-9ecd6e3d1735-cilium-ipsec-secrets\") on node \"172.24.4.93\" DevicePath \"\"" Dec 13 04:13:35.826111 kubelet[1392]: W1213 04:13:35.826041 1392 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d7bce7b_30db_48b5_9250_9ecd6e3d1735.slice/cri-containerd-30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52.scope WatchSource:0}: container "30d7379536440996cbc427dd4fa736f952862ece3819184f80bf4b46d027da52" in namespace "k8s.io": not found Dec 13 04:13:35.942842 systemd[1]: var-lib-kubelet-pods-5d7bce7b\x2d30db\x2d48b5\x2d9250\x2d9ecd6e3d1735-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmsg77.mount: Deactivated successfully. Dec 13 04:13:35.942941 systemd[1]: var-lib-kubelet-pods-5d7bce7b\x2d30db\x2d48b5\x2d9250\x2d9ecd6e3d1735-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 04:13:35.943007 systemd[1]: var-lib-kubelet-pods-5d7bce7b\x2d30db\x2d48b5\x2d9250\x2d9ecd6e3d1735-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:13:35.993083 env[1138]: time="2024-12-13T04:13:35.993041070Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:13:35.995772 env[1138]: time="2024-12-13T04:13:35.995731271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:13:35.998392 env[1138]: time="2024-12-13T04:13:35.998366631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:13:35.999831 env[1138]: time="2024-12-13T04:13:35.999718199Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 04:13:36.003783 env[1138]: time="2024-12-13T04:13:36.003727898Z" level=info msg="CreateContainer within sandbox \"4743a9b9a632190250e665bebe6b013ffdd7d2ad7cd78f822f3cdf4f38e2ce70\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 04:13:36.016162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003690790.mount: Deactivated successfully. Dec 13 04:13:36.024227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3400222946.mount: Deactivated successfully. 
Dec 13 04:13:36.030682 env[1138]: time="2024-12-13T04:13:36.030617981Z" level=info msg="CreateContainer within sandbox \"4743a9b9a632190250e665bebe6b013ffdd7d2ad7cd78f822f3cdf4f38e2ce70\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ea3e71d25d403a926a05c5a30b25b4f2f469cce618a0ac8dea1a9f4218f77f2a\"" Dec 13 04:13:36.031822 env[1138]: time="2024-12-13T04:13:36.031727132Z" level=info msg="StartContainer for \"ea3e71d25d403a926a05c5a30b25b4f2f469cce618a0ac8dea1a9f4218f77f2a\"" Dec 13 04:13:36.067307 systemd[1]: Started cri-containerd-ea3e71d25d403a926a05c5a30b25b4f2f469cce618a0ac8dea1a9f4218f77f2a.scope. Dec 13 04:13:36.106063 env[1138]: time="2024-12-13T04:13:36.105967508Z" level=info msg="StartContainer for \"ea3e71d25d403a926a05c5a30b25b4f2f469cce618a0ac8dea1a9f4218f77f2a\" returns successfully" Dec 13 04:13:36.212792 kubelet[1392]: I1213 04:13:36.211321 1392 scope.go:117] "RemoveContainer" containerID="96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed" Dec 13 04:13:36.221139 systemd[1]: Removed slice kubepods-burstable-pod5d7bce7b_30db_48b5_9250_9ecd6e3d1735.slice. 
Dec 13 04:13:36.234974 env[1138]: time="2024-12-13T04:13:36.234921359Z" level=info msg="RemoveContainer for \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\"" Dec 13 04:13:36.240915 env[1138]: time="2024-12-13T04:13:36.240885903Z" level=info msg="RemoveContainer for \"96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed\" returns successfully" Dec 13 04:13:36.377666 kubelet[1392]: I1213 04:13:36.377470 1392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pd98k" podStartSLOduration=1.033681922 podStartE2EDuration="4.377437477s" podCreationTimestamp="2024-12-13 04:13:32 +0000 UTC" firstStartedPulling="2024-12-13 04:13:32.657417436 +0000 UTC m=+79.323406396" lastFinishedPulling="2024-12-13 04:13:36.001172951 +0000 UTC m=+82.667161951" observedRunningTime="2024-12-13 04:13:36.284734846 +0000 UTC m=+82.950723806" watchObservedRunningTime="2024-12-13 04:13:36.377437477 +0000 UTC m=+83.043426487" Dec 13 04:13:36.378451 kubelet[1392]: E1213 04:13:36.378368 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735" containerName="mount-cgroup" Dec 13 04:13:36.378643 kubelet[1392]: E1213 04:13:36.378617 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735" containerName="mount-cgroup" Dec 13 04:13:36.378884 kubelet[1392]: I1213 04:13:36.378854 1392 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735" containerName="mount-cgroup" Dec 13 04:13:36.379146 kubelet[1392]: I1213 04:13:36.379117 1392 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735" containerName="mount-cgroup" Dec 13 04:13:36.395173 systemd[1]: Created slice kubepods-burstable-pod8813a271_58de_462f_88f2_afa0e966e17a.slice. 
Dec 13 04:13:36.404347 kubelet[1392]: W1213 04:13:36.404301 1392 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.24.4.93" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.93' and this object Dec 13 04:13:36.404677 kubelet[1392]: E1213 04:13:36.404615 1392 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:172.24.4.93\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.24.4.93' and this object" logger="UnhandledError" Dec 13 04:13:36.515898 kubelet[1392]: E1213 04:13:36.515817 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:36.541386 kubelet[1392]: I1213 04:13:36.541339 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-bpf-maps\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.541685 kubelet[1392]: I1213 04:13:36.541645 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8813a271-58de-462f-88f2-afa0e966e17a-cilium-ipsec-secrets\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.541988 kubelet[1392]: I1213 04:13:36.541953 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/8813a271-58de-462f-88f2-afa0e966e17a-hubble-tls\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.542184 kubelet[1392]: I1213 04:13:36.542152 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-cni-path\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.542380 kubelet[1392]: I1213 04:13:36.542348 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-lib-modules\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.542616 kubelet[1392]: I1213 04:13:36.542580 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr9lh\" (UniqueName: \"kubernetes.io/projected/8813a271-58de-462f-88f2-afa0e966e17a-kube-api-access-nr9lh\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.542896 kubelet[1392]: I1213 04:13:36.542857 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-cilium-run\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.543103 kubelet[1392]: I1213 04:13:36.543070 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-cilium-cgroup\") pod \"cilium-n45g4\" (UID: 
\"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.543301 kubelet[1392]: I1213 04:13:36.543268 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-xtables-lock\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.543529 kubelet[1392]: I1213 04:13:36.543495 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8813a271-58de-462f-88f2-afa0e966e17a-cilium-config-path\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.543798 kubelet[1392]: I1213 04:13:36.543726 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-host-proc-sys-net\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.544030 kubelet[1392]: I1213 04:13:36.543991 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-hostproc\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.544241 kubelet[1392]: I1213 04:13:36.544208 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-etc-cni-netd\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.544456 kubelet[1392]: I1213 
04:13:36.544419 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8813a271-58de-462f-88f2-afa0e966e17a-clustermesh-secrets\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.544657 kubelet[1392]: I1213 04:13:36.544624 1392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8813a271-58de-462f-88f2-afa0e966e17a-host-proc-sys-kernel\") pod \"cilium-n45g4\" (UID: \"8813a271-58de-462f-88f2-afa0e966e17a\") " pod="kube-system/cilium-n45g4" Dec 13 04:13:36.659343 kubelet[1392]: I1213 04:13:36.659172 1392 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7bce7b-30db-48b5-9250-9ecd6e3d1735" path="/var/lib/kubelet/pods/5d7bce7b-30db-48b5-9250-9ecd6e3d1735/volumes" Dec 13 04:13:36.849400 kubelet[1392]: I1213 04:13:36.849328 1392 setters.go:600] "Node became not ready" node="172.24.4.93" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T04:13:36Z","lastTransitionTime":"2024-12-13T04:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 04:13:37.312210 env[1138]: time="2024-12-13T04:13:37.310980035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n45g4,Uid:8813a271-58de-462f-88f2-afa0e966e17a,Namespace:kube-system,Attempt:0,}" Dec 13 04:13:37.343566 env[1138]: time="2024-12-13T04:13:37.336867290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:13:37.343566 env[1138]: time="2024-12-13T04:13:37.336971066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:13:37.343566 env[1138]: time="2024-12-13T04:13:37.337003588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:13:37.343566 env[1138]: time="2024-12-13T04:13:37.337480576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8 pid=3164 runtime=io.containerd.runc.v2 Dec 13 04:13:37.391977 systemd[1]: Started cri-containerd-4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8.scope. Dec 13 04:13:37.410862 env[1138]: time="2024-12-13T04:13:37.410796873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n45g4,Uid:8813a271-58de-462f-88f2-afa0e966e17a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\"" Dec 13 04:13:37.413853 env[1138]: time="2024-12-13T04:13:37.413737296Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:13:37.517211 kubelet[1392]: E1213 04:13:37.517129 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:37.522915 env[1138]: time="2024-12-13T04:13:37.522842301Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2\"" Dec 13 04:13:37.524378 env[1138]: time="2024-12-13T04:13:37.524323974Z" level=info msg="StartContainer for \"fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2\"" Dec 13 04:13:37.558314 systemd[1]: Started 
cri-containerd-fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2.scope. Dec 13 04:13:37.758240 env[1138]: time="2024-12-13T04:13:37.756580963Z" level=info msg="StartContainer for \"fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2\" returns successfully" Dec 13 04:13:37.795701 systemd[1]: cri-containerd-fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2.scope: Deactivated successfully. Dec 13 04:13:37.880099 env[1138]: time="2024-12-13T04:13:37.879982386Z" level=info msg="shim disconnected" id=fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2 Dec 13 04:13:37.880099 env[1138]: time="2024-12-13T04:13:37.880076243Z" level=warning msg="cleaning up after shim disconnected" id=fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2 namespace=k8s.io Dec 13 04:13:37.880099 env[1138]: time="2024-12-13T04:13:37.880101170Z" level=info msg="cleaning up dead shim" Dec 13 04:13:37.899045 env[1138]: time="2024-12-13T04:13:37.898957202Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3246 runtime=io.containerd.runc.v2\n" Dec 13 04:13:38.250478 env[1138]: time="2024-12-13T04:13:38.250317224Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:13:38.291868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414614835.mount: Deactivated successfully. 
Dec 13 04:13:38.307295 env[1138]: time="2024-12-13T04:13:38.307181807Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51\"" Dec 13 04:13:38.308879 env[1138]: time="2024-12-13T04:13:38.308701250Z" level=info msg="StartContainer for \"4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51\"" Dec 13 04:13:38.349218 systemd[1]: Started cri-containerd-4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51.scope. Dec 13 04:13:38.392932 env[1138]: time="2024-12-13T04:13:38.392886624Z" level=info msg="StartContainer for \"4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51\" returns successfully" Dec 13 04:13:38.412646 systemd[1]: cri-containerd-4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51.scope: Deactivated successfully. Dec 13 04:13:38.440462 env[1138]: time="2024-12-13T04:13:38.440406995Z" level=info msg="shim disconnected" id=4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51 Dec 13 04:13:38.440797 env[1138]: time="2024-12-13T04:13:38.440777614Z" level=warning msg="cleaning up after shim disconnected" id=4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51 namespace=k8s.io Dec 13 04:13:38.440895 env[1138]: time="2024-12-13T04:13:38.440880659Z" level=info msg="cleaning up dead shim" Dec 13 04:13:38.448449 env[1138]: time="2024-12-13T04:13:38.448405971Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3310 runtime=io.containerd.runc.v2\n" Dec 13 04:13:38.518261 kubelet[1392]: E1213 04:13:38.518053 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:38.949736 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51-rootfs.mount: Deactivated successfully. Dec 13 04:13:38.953084 kubelet[1392]: W1213 04:13:38.952996 1392 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d7bce7b_30db_48b5_9250_9ecd6e3d1735.slice/cri-containerd-96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed.scope WatchSource:0}: container "96bbaac251044e3c9c369f41fb42433eced157dcf1f661e7c831bde0eaab39ed" in namespace "k8s.io": not found Dec 13 04:13:39.259336 env[1138]: time="2024-12-13T04:13:39.259129024Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:13:39.301307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974986687.mount: Deactivated successfully. Dec 13 04:13:39.319342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610253315.mount: Deactivated successfully. Dec 13 04:13:39.326568 env[1138]: time="2024-12-13T04:13:39.326495273Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7\"" Dec 13 04:13:39.328171 env[1138]: time="2024-12-13T04:13:39.328068227Z" level=info msg="StartContainer for \"6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7\"" Dec 13 04:13:39.365720 systemd[1]: Started cri-containerd-6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7.scope. 
Dec 13 04:13:39.414683 env[1138]: time="2024-12-13T04:13:39.414643766Z" level=info msg="StartContainer for \"6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7\" returns successfully" Dec 13 04:13:39.421632 systemd[1]: cri-containerd-6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7.scope: Deactivated successfully. Dec 13 04:13:39.450916 env[1138]: time="2024-12-13T04:13:39.450847862Z" level=info msg="shim disconnected" id=6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7 Dec 13 04:13:39.450916 env[1138]: time="2024-12-13T04:13:39.450909478Z" level=warning msg="cleaning up after shim disconnected" id=6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7 namespace=k8s.io Dec 13 04:13:39.450916 env[1138]: time="2024-12-13T04:13:39.450922402Z" level=info msg="cleaning up dead shim" Dec 13 04:13:39.458679 env[1138]: time="2024-12-13T04:13:39.458635557Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3365 runtime=io.containerd.runc.v2\n" Dec 13 04:13:39.518880 kubelet[1392]: E1213 04:13:39.518705 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:13:39.734594 kubelet[1392]: E1213 04:13:39.734537 1392 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:13:40.268099 env[1138]: time="2024-12-13T04:13:40.268024885Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:13:40.320314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569388572.mount: Deactivated successfully. 
Dec 13 04:13:40.341031 env[1138]: time="2024-12-13T04:13:40.340916662Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03\""
Dec 13 04:13:40.342686 env[1138]: time="2024-12-13T04:13:40.342610582Z" level=info msg="StartContainer for \"eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03\""
Dec 13 04:13:40.394352 systemd[1]: Started cri-containerd-eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03.scope.
Dec 13 04:13:40.434879 systemd[1]: cri-containerd-eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03.scope: Deactivated successfully.
Dec 13 04:13:40.444560 env[1138]: time="2024-12-13T04:13:40.444514855Z" level=info msg="StartContainer for \"eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03\" returns successfully"
Dec 13 04:13:40.480129 env[1138]: time="2024-12-13T04:13:40.480068948Z" level=info msg="shim disconnected" id=eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03
Dec 13 04:13:40.480129 env[1138]: time="2024-12-13T04:13:40.480129582Z" level=warning msg="cleaning up after shim disconnected" id=eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03 namespace=k8s.io
Dec 13 04:13:40.480349 env[1138]: time="2024-12-13T04:13:40.480141304Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:40.487965 env[1138]: time="2024-12-13T04:13:40.487915332Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3422 runtime=io.containerd.runc.v2\n"
Dec 13 04:13:40.519821 kubelet[1392]: E1213 04:13:40.519621 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:40.949639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03-rootfs.mount: Deactivated successfully.
Dec 13 04:13:41.275897 env[1138]: time="2024-12-13T04:13:41.275143423Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 04:13:41.327724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646028469.mount: Deactivated successfully.
Dec 13 04:13:41.337386 env[1138]: time="2024-12-13T04:13:41.337321338Z" level=info msg="CreateContainer within sandbox \"4d5c6dfb66905e1be20ba1601db6eabb27daf44a793fc1b6173385c88c1b3ee8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1ab619ea364d540a8f9b92a394c90e75c3eb6f4e0bb38da5855f2185ce7db6ff\""
Dec 13 04:13:41.338839 env[1138]: time="2024-12-13T04:13:41.338706438Z" level=info msg="StartContainer for \"1ab619ea364d540a8f9b92a394c90e75c3eb6f4e0bb38da5855f2185ce7db6ff\""
Dec 13 04:13:41.377565 systemd[1]: Started cri-containerd-1ab619ea364d540a8f9b92a394c90e75c3eb6f4e0bb38da5855f2185ce7db6ff.scope.
Dec 13 04:13:41.435458 env[1138]: time="2024-12-13T04:13:41.435411323Z" level=info msg="StartContainer for \"1ab619ea364d540a8f9b92a394c90e75c3eb6f4e0bb38da5855f2185ce7db6ff\" returns successfully"
Dec 13 04:13:41.519923 kubelet[1392]: E1213 04:13:41.519879 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:42.110795 kubelet[1392]: W1213 04:13:42.110701 1392 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8813a271_58de_462f_88f2_afa0e966e17a.slice/cri-containerd-fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2.scope WatchSource:0}: task fcec1cdef992f90ce99677781cc610a8bc18f31eb6360e5036e6d414094d05e2 not found: not found
Dec 13 04:13:42.237826 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 04:13:42.295820 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Dec 13 04:13:42.520584 kubelet[1392]: E1213 04:13:42.520372 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:43.460721 systemd[1]: run-containerd-runc-k8s.io-1ab619ea364d540a8f9b92a394c90e75c3eb6f4e0bb38da5855f2185ce7db6ff-runc.0YT0cv.mount: Deactivated successfully.
Dec 13 04:13:43.520718 kubelet[1392]: E1213 04:13:43.520661 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:44.521562 kubelet[1392]: E1213 04:13:44.521530 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:45.223884 kubelet[1392]: W1213 04:13:45.223848 1392 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8813a271_58de_462f_88f2_afa0e966e17a.slice/cri-containerd-4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51.scope WatchSource:0}: task 4826daa7292fe6799d6d22c88b3e1472831932c872c1f472bd6d2c63caa60c51 not found: not found
Dec 13 04:13:45.523058 kubelet[1392]: E1213 04:13:45.522930 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:45.537941 systemd-networkd[971]: lxc_health: Link UP
Dec 13 04:13:45.543960 systemd-networkd[971]: lxc_health: Gained carrier
Dec 13 04:13:45.544962 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 04:13:46.523864 kubelet[1392]: E1213 04:13:46.523804 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:47.349083 systemd-networkd[971]: lxc_health: Gained IPv6LL
Dec 13 04:13:47.375454 kubelet[1392]: I1213 04:13:47.375395 1392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n45g4" podStartSLOduration=11.375381004 podStartE2EDuration="11.375381004s" podCreationTimestamp="2024-12-13 04:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:13:42.310104374 +0000 UTC m=+88.976093344" watchObservedRunningTime="2024-12-13 04:13:47.375381004 +0000 UTC m=+94.041369964"
Dec 13 04:13:47.524946 kubelet[1392]: E1213 04:13:47.524912 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:48.010469 systemd[1]: run-containerd-runc-k8s.io-1ab619ea364d540a8f9b92a394c90e75c3eb6f4e0bb38da5855f2185ce7db6ff-runc.5yspU0.mount: Deactivated successfully.
Dec 13 04:13:48.332984 kubelet[1392]: W1213 04:13:48.332914 1392 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8813a271_58de_462f_88f2_afa0e966e17a.slice/cri-containerd-6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7.scope WatchSource:0}: task 6db12c3d5ae5d3b06f5a0217c19b91c24c03a3c20602adab1bfbd2c0800434e7 not found: not found
Dec 13 04:13:48.525450 kubelet[1392]: E1213 04:13:48.525417 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:49.527989 kubelet[1392]: E1213 04:13:49.527937 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:50.196531 systemd[1]: run-containerd-runc-k8s.io-1ab619ea364d540a8f9b92a394c90e75c3eb6f4e0bb38da5855f2185ce7db6ff-runc.cbZF6V.mount: Deactivated successfully.
Dec 13 04:13:50.529494 kubelet[1392]: E1213 04:13:50.529391 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:51.442193 kubelet[1392]: W1213 04:13:51.442121 1392 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8813a271_58de_462f_88f2_afa0e966e17a.slice/cri-containerd-eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03.scope WatchSource:0}: task eca1341f6bcdc335b0888cbf49bd5e75fd99b7009c6a8e0d48a63966d4b69f03 not found: not found
Dec 13 04:13:51.531275 kubelet[1392]: E1213 04:13:51.531224 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:52.532841 kubelet[1392]: E1213 04:13:52.532793 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:53.534015 kubelet[1392]: E1213 04:13:53.533892 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:54.437419 kubelet[1392]: E1213 04:13:54.437371 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:54.534480 kubelet[1392]: E1213 04:13:54.534438 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:55.536253 kubelet[1392]: E1213 04:13:55.536161 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 04:13:56.536433 kubelet[1392]: E1213 04:13:56.536388 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"