Dec 13 04:01:27.020293 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 04:01:27.020316 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 04:01:27.020328 kernel: BIOS-provided physical RAM map: Dec 13 04:01:27.020335 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 04:01:27.020342 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 04:01:27.020350 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 04:01:27.020358 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 13 04:01:27.020365 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 13 04:01:27.020373 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 04:01:27.020380 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 04:01:27.020387 kernel: NX (Execute Disable) protection: active Dec 13 04:01:27.020394 kernel: SMBIOS 2.8 present. Dec 13 04:01:27.020401 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Dec 13 04:01:27.020408 kernel: Hypervisor detected: KVM Dec 13 04:01:27.020416 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 04:01:27.020426 kernel: kvm-clock: cpu 0, msr 5419b001, primary cpu clock Dec 13 04:01:27.020433 kernel: kvm-clock: using sched offset of 6943464215 cycles Dec 13 04:01:27.020442 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 04:01:27.020450 kernel: tsc: Detected 1996.249 MHz processor Dec 13 04:01:27.020458 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 04:01:27.020466 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 04:01:27.020474 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 13 04:01:27.020482 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 04:01:27.020491 kernel: ACPI: Early table checksum verification disabled Dec 13 04:01:27.020499 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Dec 13 04:01:27.020507 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:01:27.020515 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:01:27.020522 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:01:27.020530 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 04:01:27.020538 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:01:27.020545 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 04:01:27.020553 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Dec 13 04:01:27.020563 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Dec 13 04:01:27.020570 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 04:01:27.020578 kernel: ACPI: Reserving APIC table memory at [mem 
0x7ffe17a0-0x7ffe181f] Dec 13 04:01:27.020586 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Dec 13 04:01:27.020593 kernel: No NUMA configuration found Dec 13 04:01:27.020601 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Dec 13 04:01:27.020611 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Dec 13 04:01:27.020621 kernel: Zone ranges: Dec 13 04:01:27.020638 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 04:01:27.020646 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Dec 13 04:01:27.020654 kernel: Normal empty Dec 13 04:01:27.020662 kernel: Movable zone start for each node Dec 13 04:01:27.020670 kernel: Early memory node ranges Dec 13 04:01:27.020679 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 04:01:27.020695 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 13 04:01:27.020707 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Dec 13 04:01:27.020719 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 04:01:27.020729 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 04:01:27.020737 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Dec 13 04:01:27.020745 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 04:01:27.020753 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 04:01:27.020761 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 04:01:27.020769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 04:01:27.020779 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 04:01:27.020787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 04:01:27.020796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 04:01:27.020803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 04:01:27.020811 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 04:01:27.020819 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 04:01:27.020827 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 04:01:27.020835 kernel: Booting paravirtualized kernel on KVM Dec 13 04:01:27.020843 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 04:01:27.020852 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 04:01:27.020862 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 04:01:27.020870 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 04:01:27.020879 kernel: pcpu-alloc: [0] 0 1 Dec 13 04:01:27.020887 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Dec 13 04:01:27.020895 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 04:01:27.020903 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515805 Dec 13 04:01:27.020911 kernel: Policy zone: DMA32 Dec 13 04:01:27.020920 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 04:01:27.020931 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 04:01:27.020939 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 04:01:27.020947 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 04:01:27.020955 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 04:01:27.020963 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123076K reserved, 0K cma-reserved) Dec 13 04:01:27.020972 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 04:01:27.020979 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 04:01:27.020988 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 04:01:27.020997 kernel: rcu: Hierarchical RCU implementation. Dec 13 04:01:27.021005 kernel: rcu: RCU event tracing is enabled. Dec 13 04:01:27.021014 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 04:01:27.021022 kernel: Rude variant of Tasks RCU enabled. Dec 13 04:01:27.021030 kernel: Tracing variant of Tasks RCU enabled. Dec 13 04:01:27.021038 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 04:01:27.021046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 04:01:27.021081 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 04:01:27.021089 kernel: Console: colour VGA+ 80x25 Dec 13 04:01:27.021100 kernel: printk: console [tty0] enabled Dec 13 04:01:27.021108 kernel: printk: console [ttyS0] enabled Dec 13 04:01:27.021116 kernel: ACPI: Core revision 20210730 Dec 13 04:01:27.021125 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 04:01:27.021133 kernel: x2apic enabled Dec 13 04:01:27.021141 kernel: Switched APIC routing to physical x2apic. Dec 13 04:01:27.021149 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 04:01:27.021157 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 04:01:27.021165 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Dec 13 04:01:27.021173 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 04:01:27.021183 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 04:01:27.021192 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 04:01:27.021200 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 04:01:27.021208 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 04:01:27.021216 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 04:01:27.021224 kernel: Speculative Store Bypass: Vulnerable Dec 13 04:01:27.021232 kernel: x86/fpu: x87 FPU will use FXSAVE Dec 13 04:01:27.021240 kernel: Freeing SMP alternatives memory: 32K Dec 13 04:01:27.021248 kernel: pid_max: default: 32768 minimum: 301 Dec 13 04:01:27.021257 kernel: LSM: Security Framework initializing Dec 13 04:01:27.021265 kernel: SELinux: Initializing. Dec 13 04:01:27.021273 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 04:01:27.021281 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 04:01:27.021290 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Dec 13 04:01:27.021298 kernel: Performance Events: AMD PMU driver. Dec 13 04:01:27.021306 kernel: ... version: 0 Dec 13 04:01:27.021314 kernel: ... bit width: 48 Dec 13 04:01:27.021322 kernel: ... generic registers: 4 Dec 13 04:01:27.021336 kernel: ... value mask: 0000ffffffffffff Dec 13 04:01:27.021345 kernel: ... max period: 00007fffffffffff Dec 13 04:01:27.021355 kernel: ... fixed-purpose events: 0 Dec 13 04:01:27.021363 kernel: ... event mask: 000000000000000f Dec 13 04:01:27.021371 kernel: signal: max sigframe size: 1440 Dec 13 04:01:27.021380 kernel: rcu: Hierarchical SRCU implementation. Dec 13 04:01:27.021388 kernel: smp: Bringing up secondary CPUs ... Dec 13 04:01:27.021396 kernel: x86: Booting SMP configuration: Dec 13 04:01:27.021406 kernel: .... 
node #0, CPUs: #1 Dec 13 04:01:27.021415 kernel: kvm-clock: cpu 1, msr 5419b041, secondary cpu clock Dec 13 04:01:27.021423 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Dec 13 04:01:27.021431 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 04:01:27.021440 kernel: smpboot: Max logical packages: 2 Dec 13 04:01:27.021448 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Dec 13 04:01:27.021457 kernel: devtmpfs: initialized Dec 13 04:01:27.021465 kernel: x86/mm: Memory block size: 128MB Dec 13 04:01:27.021474 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 04:01:27.021484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 04:01:27.021492 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 04:01:27.021501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 04:01:27.021509 kernel: audit: initializing netlink subsys (disabled) Dec 13 04:01:27.021518 kernel: audit: type=2000 audit(1734062485.978:1): state=initialized audit_enabled=0 res=1 Dec 13 04:01:27.021526 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 04:01:27.021534 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 04:01:27.021543 kernel: cpuidle: using governor menu Dec 13 04:01:27.021551 kernel: ACPI: bus type PCI registered Dec 13 04:01:27.021562 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 04:01:27.021570 kernel: dca service started, version 1.12.1 Dec 13 04:01:27.021579 kernel: PCI: Using configuration type 1 for base access Dec 13 04:01:27.021587 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 04:01:27.021596 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 04:01:27.021604 kernel: ACPI: Added _OSI(Module Device) Dec 13 04:01:27.021612 kernel: ACPI: Added _OSI(Processor Device) Dec 13 04:01:27.021621 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 04:01:27.021629 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 04:01:27.021639 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 04:01:27.021662 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 04:01:27.021670 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 04:01:27.021679 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 04:01:27.021687 kernel: ACPI: Interpreter enabled Dec 13 04:01:27.021695 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 04:01:27.021704 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 04:01:27.021712 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 04:01:27.021721 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 04:01:27.021731 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 04:01:27.021869 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 04:01:27.021966 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 04:01:27.021980 kernel: acpiphp: Slot [3] registered Dec 13 04:01:27.021989 kernel: acpiphp: Slot [4] registered Dec 13 04:01:27.021997 kernel: acpiphp: Slot [5] registered Dec 13 04:01:27.022006 kernel: acpiphp: Slot [6] registered Dec 13 04:01:27.022017 kernel: acpiphp: Slot [7] registered Dec 13 04:01:27.022026 kernel: acpiphp: Slot [8] registered Dec 13 04:01:27.022034 kernel: acpiphp: Slot [9] registered Dec 13 04:01:27.022043 kernel: acpiphp: Slot [10] registered Dec 13 04:01:27.022067 kernel: acpiphp: Slot [11] registered Dec 13 04:01:27.022076 kernel: acpiphp: Slot [12] registered Dec 13 04:01:27.022084 kernel: acpiphp: Slot [13] registered Dec 13 04:01:27.022092 kernel: acpiphp: Slot [14] registered Dec 13 04:01:27.022100 kernel: acpiphp: Slot [15] registered Dec 13 04:01:27.022109 kernel: acpiphp: Slot [16] registered Dec 13 04:01:27.022120 kernel: acpiphp: Slot [17] registered Dec 13 04:01:27.022128 kernel: acpiphp: Slot [18] registered Dec 13 04:01:27.022136 kernel: acpiphp: Slot [19] registered Dec 13 04:01:27.022144 kernel: acpiphp: Slot [20] registered Dec 13 04:01:27.022152 kernel: acpiphp: Slot [21] registered Dec 13 04:01:27.022161 kernel: acpiphp: Slot [22] registered Dec 13 04:01:27.022169 kernel: acpiphp: Slot [23] registered Dec 13 04:01:27.022177 kernel: acpiphp: Slot [24] registered Dec 13 04:01:27.022185 kernel: acpiphp: Slot [25] registered Dec 13 04:01:27.022195 kernel: acpiphp: Slot [26] registered Dec 13 04:01:27.022204 kernel: acpiphp: Slot [27] registered Dec 13 04:01:27.022212 kernel: acpiphp: Slot [28] registered Dec 13 04:01:27.022220 kernel: acpiphp: Slot [29] registered Dec 13 04:01:27.022228 kernel: acpiphp: Slot [30] registered Dec 13 04:01:27.022236 kernel: acpiphp: Slot [31] registered Dec 13 04:01:27.022244 kernel: PCI host bridge to bus 0000:00 Dec 13 04:01:27.022405 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 04:01:27.027216 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 04:01:27.027348 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 04:01:27.027422 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 04:01:27.027499 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 04:01:27.027575 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 04:01:27.027690 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 04:01:27.027789 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 04:01:27.027891 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 04:01:27.027987 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Dec 13 04:01:27.028102 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 04:01:27.028190 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 04:01:27.028275 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 04:01:27.028360 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 04:01:27.028459 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 04:01:27.028550 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 04:01:27.028638 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 04:01:27.028736 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 04:01:27.028823 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 
04:01:27.028910 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 04:01:27.028997 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Dec 13 04:01:27.029112 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Dec 13 04:01:27.029201 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 04:01:27.029293 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 04:01:27.029375 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Dec 13 04:01:27.029456 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Dec 13 04:01:27.029536 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 04:01:27.029617 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Dec 13 04:01:27.029736 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 04:01:27.029820 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 04:01:27.029901 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Dec 13 04:01:27.029982 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 04:01:27.030088 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 04:01:27.030174 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Dec 13 04:01:27.030312 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 04:01:27.030406 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 04:01:27.032196 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Dec 13 04:01:27.032300 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 04:01:27.032313 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 04:01:27.032322 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 04:01:27.032331 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 04:01:27.032340 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 04:01:27.032349 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 04:01:27.032362 kernel: iommu: Default domain type: Translated Dec 13 04:01:27.032371 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 04:01:27.032464 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 04:01:27.032555 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 04:01:27.032650 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 04:01:27.032667 kernel: vgaarb: loaded Dec 13 04:01:27.032676 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 04:01:27.032685 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 04:01:27.032693 kernel: PTP clock support registered Dec 13 04:01:27.032705 kernel: PCI: Using ACPI for IRQ routing Dec 13 04:01:27.032713 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 04:01:27.032722 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 04:01:27.032731 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 13 04:01:27.032739 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 04:01:27.032747 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 04:01:27.032756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 04:01:27.032765 kernel: pnp: PnP ACPI init Dec 13 04:01:27.032863 kernel: pnp 00:03: [dma 2] Dec 13 04:01:27.032880 kernel: pnp: PnP ACPI: found 5 devices Dec 13 04:01:27.032889 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 04:01:27.032898 kernel: NET: Registered PF_INET protocol family Dec 13 04:01:27.032906 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 04:01:27.032915 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 04:01:27.032924 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 04:01:27.032933 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 04:01:27.032941 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 04:01:27.032952 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 04:01:27.032960 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 04:01:27.032969 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 04:01:27.032978 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 04:01:27.032986 kernel: NET: Registered PF_XDP protocol family Dec 13 04:01:27.038285 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 04:01:27.038435 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 04:01:27.038522 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 04:01:27.038606 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 04:01:27.038698 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 04:01:27.038804 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 04:01:27.038906 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 04:01:27.039000 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Dec 13 04:01:27.039015 kernel: PCI: CLS 0 bytes, default 64 Dec 13 04:01:27.039026 kernel: Initialise system trusted keyrings Dec 13 04:01:27.039036 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 04:01:27.039076 kernel: Key type asymmetric registered Dec 13 04:01:27.039087 kernel: Asymmetric key parser 'x509' registered Dec 13 04:01:27.039097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 04:01:27.039107 kernel: io scheduler mq-deadline registered Dec 13 04:01:27.039117 kernel: io scheduler kyber registered Dec 13 04:01:27.039126 kernel: io scheduler bfq registered Dec 13 04:01:27.039136 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 04:01:27.039147 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 04:01:27.039157 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 04:01:27.039167 kernel: ACPI: \_SB_.LNKD: Enabled at 
IRQ 11 Dec 13 04:01:27.039179 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 04:01:27.039189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 04:01:27.039199 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 04:01:27.039209 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 04:01:27.039218 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 04:01:27.039228 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 04:01:27.039238 kernel: random: crng init done Dec 13 04:01:27.039355 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 04:01:27.039374 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 04:01:27.039459 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 04:01:27.039548 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T04:01:26 UTC (1734062486) Dec 13 04:01:27.039629 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 04:01:27.039641 kernel: NET: Registered PF_INET6 protocol family Dec 13 04:01:27.039651 kernel: Segment Routing with IPv6 Dec 13 04:01:27.039660 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 04:01:27.039669 kernel: NET: Registered PF_PACKET protocol family Dec 13 04:01:27.039678 kernel: Key type dns_resolver registered Dec 13 04:01:27.039690 kernel: IPI shorthand broadcast: enabled Dec 13 04:01:27.039700 kernel: sched_clock: Marking stable (699006065, 121591988)->(878192830, -57594777) Dec 13 04:01:27.039709 kernel: registered taskstats version 1 Dec 13 04:01:27.039718 kernel: Loading compiled-in X.509 certificates Dec 13 04:01:27.039728 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 04:01:27.039737 kernel: Key type .fscrypt registered Dec 13 04:01:27.039745 kernel: Key type fscrypt-provisioning registered Dec 13 04:01:27.039755 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 04:01:27.039765 kernel: ima: Allocated hash algorithm: sha1 Dec 13 04:01:27.039774 kernel: ima: No architecture policies found Dec 13 04:01:27.039783 kernel: clk: Disabling unused clocks Dec 13 04:01:27.039792 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 04:01:27.039801 kernel: Write protecting the kernel read-only data: 28672k Dec 13 04:01:27.039810 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 04:01:27.039819 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 04:01:27.039828 kernel: Run /init as init process Dec 13 04:01:27.039837 kernel: with arguments: Dec 13 04:01:27.039848 kernel: /init Dec 13 04:01:27.039857 kernel: with environment: Dec 13 04:01:27.039866 kernel: HOME=/ Dec 13 04:01:27.039875 kernel: TERM=linux Dec 13 04:01:27.039884 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 04:01:27.039896 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 04:01:27.039908 systemd[1]: Detected virtualization kvm. Dec 13 04:01:27.039918 systemd[1]: Detected architecture x86-64. Dec 13 04:01:27.039930 systemd[1]: Running in initrd. Dec 13 04:01:27.039940 systemd[1]: No hostname configured, using default hostname. 
Dec 13 04:01:27.039949 systemd[1]: Hostname set to . Dec 13 04:01:27.039959 systemd[1]: Initializing machine ID from VM UUID. Dec 13 04:01:27.039969 systemd[1]: Queued start job for default target initrd.target. Dec 13 04:01:27.039978 systemd[1]: Started systemd-ask-password-console.path. Dec 13 04:01:27.039988 systemd[1]: Reached target cryptsetup.target. Dec 13 04:01:27.039997 systemd[1]: Reached target paths.target. Dec 13 04:01:27.040009 systemd[1]: Reached target slices.target. Dec 13 04:01:27.040019 systemd[1]: Reached target swap.target. Dec 13 04:01:27.040028 systemd[1]: Reached target timers.target. Dec 13 04:01:27.040038 systemd[1]: Listening on iscsid.socket. Dec 13 04:01:27.040048 systemd[1]: Listening on iscsiuio.socket. Dec 13 04:01:27.040084 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 04:01:27.040094 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 04:01:27.040106 systemd[1]: Listening on systemd-journald.socket. Dec 13 04:01:27.040115 systemd[1]: Listening on systemd-networkd.socket. Dec 13 04:01:27.040125 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 04:01:27.040134 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 04:01:27.040144 systemd[1]: Reached target sockets.target. Dec 13 04:01:27.040163 systemd[1]: Starting kmod-static-nodes.service... Dec 13 04:01:27.040175 systemd[1]: Finished network-cleanup.service. Dec 13 04:01:27.040186 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 04:01:27.040196 systemd[1]: Starting systemd-journald.service... Dec 13 04:01:27.040206 systemd[1]: Starting systemd-modules-load.service... Dec 13 04:01:27.040216 systemd[1]: Starting systemd-resolved.service... Dec 13 04:01:27.040226 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 04:01:27.040236 systemd[1]: Finished kmod-static-nodes.service. Dec 13 04:01:27.040245 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 04:01:27.040262 systemd-journald[186]: Journal started Dec 13 04:01:27.040320 systemd-journald[186]: Runtime Journal (/run/log/journal/ca97bd47ca2d4a62bb010776e2063747) is 4.9M, max 39.5M, 34.5M free. Dec 13 04:01:27.027331 systemd-modules-load[187]: Inserted module 'overlay' Dec 13 04:01:27.074854 kernel: audit: type=1130 audit(1734062487.069:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.074896 systemd[1]: Started systemd-journald.service. Dec 13 04:01:27.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.042463 systemd-resolved[188]: Positive Trust Anchors: Dec 13 04:01:27.042476 systemd-resolved[188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 04:01:27.042516 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 04:01:27.045545 systemd-resolved[188]: Defaulting to hostname 'linux'. 
Dec 13 04:01:27.087245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 04:01:27.087276 kernel: audit: type=1130 audit(1734062487.082:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.082969 systemd[1]: Started systemd-resolved.service. Dec 13 04:01:27.092274 kernel: audit: type=1130 audit(1734062487.087:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.088041 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 04:01:27.094070 kernel: Bridge firewalling registered Dec 13 04:01:27.092803 systemd-modules-load[187]: Inserted module 'br_netfilter' Dec 13 04:01:27.095425 systemd[1]: Reached target nss-lookup.target. Dec 13 04:01:27.096788 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 04:01:27.099670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 04:01:27.105329 kernel: audit: type=1130 audit(1734062487.095:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.110876 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 04:01:27.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.116083 kernel: audit: type=1130 audit(1734062487.111:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.119514 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 04:01:27.121011 systemd[1]: Starting dracut-cmdline.service... Dec 13 04:01:27.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.127588 kernel: audit: type=1130 audit(1734062487.119:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:01:27.133412 dracut-cmdline[203]: dracut-dracut-053 Dec 13 04:01:27.134565 kernel: SCSI subsystem initialized Dec 13 04:01:27.136537 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 04:01:27.151085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 04:01:27.153713 kernel: device-mapper: uevent: version 1.0.3 Dec 13 04:01:27.153742 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 04:01:27.160007 systemd-modules-load[187]: Inserted module 'dm_multipath' Dec 13 04:01:27.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.161200 systemd[1]: Finished systemd-modules-load.service. Dec 13 04:01:27.167296 kernel: audit: type=1130 audit(1734062487.161:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.162644 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:01:27.173856 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:01:27.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.179077 kernel: audit: type=1130 audit(1734062487.174:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.230109 kernel: Loading iSCSI transport class v2.0-870. Dec 13 04:01:27.252117 kernel: iscsi: registered transport (tcp) Dec 13 04:01:27.280182 kernel: iscsi: registered transport (qla4xxx) Dec 13 04:01:27.280275 kernel: QLogic iSCSI HBA Driver Dec 13 04:01:27.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.338376 systemd[1]: Finished dracut-cmdline.service. Dec 13 04:01:27.345044 kernel: audit: type=1130 audit(1734062487.339:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.341537 systemd[1]: Starting dracut-pre-udev.service... Dec 13 04:01:27.428123 kernel: raid6: sse2x4 gen() 12512 MB/s Dec 13 04:01:27.446102 kernel: raid6: sse2x4 xor() 4693 MB/s Dec 13 04:01:27.464101 kernel: raid6: sse2x2 gen() 13542 MB/s Dec 13 04:01:27.482186 kernel: raid6: sse2x2 xor() 8233 MB/s Dec 13 04:01:27.500174 kernel: raid6: sse2x1 gen() 10378 MB/s Dec 13 04:01:27.518326 kernel: raid6: sse2x1 xor() 6596 MB/s Dec 13 04:01:27.518370 kernel: raid6: using algorithm sse2x2 gen() 13542 MB/s Dec 13 04:01:27.518383 kernel: raid6: .... 
xor() 8233 MB/s, rmw enabled Dec 13 04:01:27.519485 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 04:01:27.536646 kernel: xor: measuring software checksum speed Dec 13 04:01:27.536695 kernel: prefetch64-sse : 7930 MB/sec Dec 13 04:01:27.539667 kernel: generic_sse : 5198 MB/sec Dec 13 04:01:27.539693 kernel: xor: using function: prefetch64-sse (7930 MB/sec) Dec 13 04:01:27.657174 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 04:01:27.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.674556 systemd[1]: Finished dracut-pre-udev.service. Dec 13 04:01:27.676000 audit: BPF prog-id=7 op=LOAD Dec 13 04:01:27.676000 audit: BPF prog-id=8 op=LOAD Dec 13 04:01:27.678367 systemd[1]: Starting systemd-udevd.service... Dec 13 04:01:27.691691 systemd-udevd[386]: Using default interface naming scheme 'v252'. Dec 13 04:01:27.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.696482 systemd[1]: Started systemd-udevd.service. Dec 13 04:01:27.700962 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 04:01:27.729950 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 04:01:27.798506 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 04:01:27.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.801620 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 04:01:27.873630 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 04:01:27.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:27.943083 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 04:01:27.971685 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 04:01:27.971705 kernel: GPT:17805311 != 41943039 Dec 13 04:01:27.971717 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 04:01:27.971728 kernel: GPT:17805311 != 41943039 Dec 13 04:01:27.971739 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 04:01:27.971749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:01:27.995085 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (432) Dec 13 04:01:28.002731 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 04:01:28.052819 kernel: libata version 3.00 loaded. Dec 13 04:01:28.052852 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 04:01:28.053090 kernel: scsi host0: ata_piix Dec 13 04:01:28.053215 kernel: scsi host1: ata_piix Dec 13 04:01:28.053312 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 04:01:28.053325 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 04:01:28.060606 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 04:01:28.063882 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Dec 13 04:01:28.064421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 04:01:28.069251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 04:01:28.071874 systemd[1]: Starting disk-uuid.service... Dec 13 04:01:28.083909 disk-uuid[462]: Primary Header is updated. Dec 13 04:01:28.083909 disk-uuid[462]: Secondary Entries is updated. Dec 13 04:01:28.083909 disk-uuid[462]: Secondary Header is updated. Dec 13 04:01:28.094099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:01:28.099079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:01:29.117077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:01:29.117798 disk-uuid[463]: The operation has completed successfully. Dec 13 04:01:29.167783 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 04:01:29.168671 systemd[1]: Finished disk-uuid.service. Dec 13 04:01:29.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.170682 systemd[1]: Starting verity-setup.service... Dec 13 04:01:29.208006 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 04:01:29.310450 systemd[1]: Found device dev-mapper-usr.device. Dec 13 04:01:29.315760 systemd[1]: Mounting sysusr-usr.mount... Dec 13 04:01:29.321757 systemd[1]: Finished verity-setup.service. Dec 13 04:01:29.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.476084 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 04:01:29.476774 systemd[1]: Mounted sysusr-usr.mount. Dec 13 04:01:29.477938 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 04:01:29.479233 systemd[1]: Starting ignition-setup.service... Dec 13 04:01:29.482749 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 04:01:29.496087 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:01:29.496167 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:01:29.496188 kernel: BTRFS info (device vda6): has skinny extents Dec 13 04:01:29.525431 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 04:01:29.553008 systemd[1]: Finished ignition-setup.service. Dec 13 04:01:29.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.555339 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 04:01:29.575972 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 04:01:29.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.578000 audit: BPF prog-id=9 op=LOAD Dec 13 04:01:29.579485 systemd[1]: Starting systemd-networkd.service... 
Dec 13 04:01:29.610626 systemd-networkd[633]: lo: Link UP Dec 13 04:01:29.611421 systemd-networkd[633]: lo: Gained carrier Dec 13 04:01:29.612372 systemd-networkd[633]: Enumeration completed Dec 13 04:01:29.613098 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 04:01:29.613251 systemd[1]: Started systemd-networkd.service. Dec 13 04:01:29.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.614855 systemd-networkd[633]: eth0: Link UP Dec 13 04:01:29.614859 systemd-networkd[633]: eth0: Gained carrier Dec 13 04:01:29.616430 systemd[1]: Reached target network.target. Dec 13 04:01:29.618074 systemd[1]: Starting iscsiuio.service... Dec 13 04:01:29.636266 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.88/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 04:01:29.637267 systemd[1]: Started iscsiuio.service. Dec 13 04:01:29.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.641033 systemd[1]: Starting iscsid.service... Dec 13 04:01:29.648401 iscsid[638]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 04:01:29.648401 iscsid[638]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 04:01:29.648401 iscsid[638]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 04:01:29.648401 iscsid[638]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 04:01:29.648401 iscsid[638]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 04:01:29.648401 iscsid[638]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 04:01:29.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.648506 systemd[1]: Started iscsid.service. Dec 13 04:01:29.651221 systemd[1]: Starting dracut-initqueue.service... Dec 13 04:01:29.673417 systemd[1]: Finished dracut-initqueue.service. Dec 13 04:01:29.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:29.674875 systemd[1]: Reached target remote-fs-pre.target. Dec 13 04:01:29.676732 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 04:01:29.678774 systemd[1]: Reached target remote-fs.target. Dec 13 04:01:29.682494 systemd[1]: Starting dracut-pre-mount.service... Dec 13 04:01:29.701472 systemd[1]: Finished dracut-pre-mount.service. Dec 13 04:01:29.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 04:01:30.156077 ignition[615]: Ignition 2.14.0 Dec 13 04:01:30.157200 ignition[615]: Stage: fetch-offline Dec 13 04:01:30.158022 ignition[615]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:01:30.159184 ignition[615]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:01:30.161429 ignition[615]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:01:30.162833 ignition[615]: parsed url from cmdline: "" Dec 13 04:01:30.162944 ignition[615]: no config URL provided Dec 13 04:01:30.163750 ignition[615]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:01:30.164710 ignition[615]: no config at "/usr/lib/ignition/user.ign" Dec 13 04:01:30.165438 ignition[615]: failed to fetch config: resource requires networking Dec 13 04:01:30.166997 ignition[615]: Ignition finished successfully Dec 13 04:01:30.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:30.168766 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 04:01:30.170538 systemd[1]: Starting ignition-fetch.service... Dec 13 04:01:30.181621 ignition[656]: Ignition 2.14.0 Dec 13 04:01:30.181657 ignition[656]: Stage: fetch Dec 13 04:01:30.181778 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:01:30.181801 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:01:30.182853 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:01:30.182949 ignition[656]: parsed url from cmdline: "" Dec 13 04:01:30.182953 ignition[656]: no config URL provided Dec 13 04:01:30.182959 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:01:30.182967 ignition[656]: no config at "/usr/lib/ignition/user.ign" Dec 13 04:01:30.241849 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 04:01:30.241904 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 04:01:30.241911 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 04:01:30.610135 ignition[656]: GET result: OK Dec 13 04:01:30.610312 ignition[656]: parsing config with SHA512: 94b5d994dbcd5b6a9bb0570cbc84ac01493a470ff75a7d21ecd4d324dc34aabf69281a41f9d85e52a2216fb3803ca41de101f3088896c94d9c6250a8bd85c2b9 Dec 13 04:01:30.683768 unknown[656]: fetched base config from "system" Dec 13 04:01:30.684561 unknown[656]: fetched base config from "system" Dec 13 04:01:30.685156 unknown[656]: fetched user config from "openstack" Dec 13 04:01:30.685847 ignition[656]: fetch: fetch complete Dec 13 04:01:30.685868 ignition[656]: fetch: fetch passed Dec 13 04:01:30.685959 ignition[656]: Ignition finished successfully Dec 13 04:01:30.689696 systemd[1]: Finished ignition-fetch.service. Dec 13 04:01:30.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:30.692408 systemd[1]: Starting ignition-kargs.service... 
Dec 13 04:01:30.704106 ignition[662]: Ignition 2.14.0 Dec 13 04:01:30.704119 ignition[662]: Stage: kargs Dec 13 04:01:30.704234 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:01:30.707711 systemd[1]: Finished ignition-kargs.service. Dec 13 04:01:30.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:30.704255 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:01:30.711048 systemd[1]: Starting ignition-disks.service... Dec 13 04:01:30.705203 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:01:30.706046 ignition[662]: kargs: kargs passed Dec 13 04:01:30.706109 ignition[662]: Ignition finished successfully Dec 13 04:01:30.726501 ignition[667]: Ignition 2.14.0 Dec 13 04:01:30.726523 ignition[667]: Stage: disks Dec 13 04:01:30.726746 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:01:30.726781 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:01:30.728032 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:01:30.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:30.729854 systemd[1]: Finished ignition-disks.service. Dec 13 04:01:30.728936 ignition[667]: disks: disks passed Dec 13 04:01:30.730534 systemd[1]: Reached target initrd-root-device.target. Dec 13 04:01:30.728987 ignition[667]: Ignition finished successfully Dec 13 04:01:30.731026 systemd[1]: Reached target local-fs-pre.target. Dec 13 04:01:30.731524 systemd[1]: Reached target local-fs.target. Dec 13 04:01:30.732649 systemd[1]: Reached target sysinit.target. Dec 13 04:01:30.733546 systemd[1]: Reached target basic.target. Dec 13 04:01:30.735325 systemd[1]: Starting systemd-fsck-root.service... Dec 13 04:01:30.819255 systemd-fsck[675]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 04:01:30.911536 systemd[1]: Finished systemd-fsck-root.service. Dec 13 04:01:30.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:30.914294 systemd[1]: Mounting sysroot.mount... Dec 13 04:01:31.066100 systemd-networkd[633]: eth0: Gained IPv6LL Dec 13 04:01:31.281116 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 04:01:31.282316 systemd[1]: Mounted sysroot.mount. Dec 13 04:01:31.284768 systemd[1]: Reached target initrd-root-fs.target. Dec 13 04:01:31.353116 systemd[1]: Mounting sysroot-usr.mount... Dec 13 04:01:31.355219 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 04:01:31.356665 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 04:01:31.363214 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Dec 13 04:01:31.363305 systemd[1]: Reached target ignition-diskful.target. Dec 13 04:01:31.369225 systemd[1]: Mounted sysroot-usr.mount. Dec 13 04:01:31.405294 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 04:01:31.409499 systemd[1]: Starting initrd-setup-root.service... Dec 13 04:01:31.426672 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 04:01:31.454828 initrd-setup-root[695]: cut: /sysroot/etc/group: No such file or directory Dec 13 04:01:31.462110 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Dec 13 04:01:31.472096 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:01:31.472170 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:01:31.472183 kernel: BTRFS info (device vda6): has skinny extents Dec 13 04:01:31.476509 initrd-setup-root[715]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 04:01:31.508040 initrd-setup-root[727]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 04:01:31.652632 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 04:01:31.892570 systemd[1]: Finished initrd-setup-root.service. Dec 13 04:01:31.905282 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 13 04:01:31.905372 kernel: audit: type=1130 audit(1734062491.893:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:31.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:31.895167 systemd[1]: Starting ignition-mount.service... Dec 13 04:01:31.907932 systemd[1]: Starting sysroot-boot.service... Dec 13 04:01:31.916191 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 04:01:31.916389 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 04:01:31.950399 ignition[750]: INFO : Ignition 2.14.0 Dec 13 04:01:31.951996 ignition[750]: INFO : Stage: mount Dec 13 04:01:31.952709 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:01:31.953495 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:01:31.955594 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:01:31.957253 ignition[750]: INFO : mount: mount passed Dec 13 04:01:31.958795 ignition[750]: INFO : Ignition finished successfully Dec 13 04:01:31.960854 systemd[1]: Finished ignition-mount.service. Dec 13 04:01:31.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:31.966076 kernel: audit: type=1130 audit(1734062491.961:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:31.973569 systemd[1]: Finished sysroot-boot.service. Dec 13 04:01:31.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:01:31.979082 kernel: audit: type=1130 audit(1734062491.974:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:32.020182 coreos-metadata[681]: Dec 13 04:01:32.020 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 04:01:32.043859 coreos-metadata[681]: Dec 13 04:01:32.043 INFO Fetch successful Dec 13 04:01:32.046135 coreos-metadata[681]: Dec 13 04:01:32.045 INFO wrote hostname ci-3510-3-6-f-70d9d685f8.novalocal to /sysroot/etc/hostname Dec 13 04:01:32.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:32.052312 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 04:01:32.069865 kernel: audit: type=1130 audit(1734062492.052:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:32.069889 kernel: audit: type=1131 audit(1734062492.052:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:32.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:32.052401 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 04:01:32.053619 systemd[1]: Starting ignition-files.service... Dec 13 04:01:32.074319 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 04:01:32.088106 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Dec 13 04:01:32.093398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:01:32.093423 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:01:32.093434 kernel: BTRFS info (device vda6): has skinny extents Dec 13 04:01:32.100143 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 04:01:32.111632 ignition[777]: INFO : Ignition 2.14.0 Dec 13 04:01:32.112517 ignition[777]: INFO : Stage: files Dec 13 04:01:32.113148 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:01:32.114029 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:01:32.116114 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:01:32.118799 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Dec 13 04:01:32.120388 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 04:01:32.121293 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 04:01:32.128799 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 04:01:32.129916 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 04:01:32.132019 unknown[777]: wrote ssh authorized keys file for user: core Dec 13 04:01:32.132761 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 04:01:32.134309 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 04:01:32.135350 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 04:01:32.136557 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 04:01:32.137533 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 04:01:32.138739 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:01:32.138739 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:01:32.138739 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:01:32.138739 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 04:01:33.403454 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 04:01:35.077551 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 04:01:35.077551 ignition[777]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 04:01:35.077551 ignition[777]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 04:01:35.077551 ignition[777]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 04:01:35.102586 kernel: audit: type=1130 audit(1734062495.090:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel 
msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.102661 ignition[777]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 04:01:35.102661 ignition[777]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 04:01:35.102661 ignition[777]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 04:01:35.102661 ignition[777]: INFO : files: files passed Dec 13 04:01:35.102661 ignition[777]: INFO : Ignition finished successfully Dec 13 04:01:35.085027 systemd[1]: Finished ignition-files.service. Dec 13 04:01:35.090821 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 04:01:35.108329 initrd-setup-root-after-ignition[801]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 04:01:35.113905 kernel: audit: type=1130 audit(1734062495.109:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.101927 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 04:01:35.122394 kernel: audit: type=1130 audit(1734062495.114:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.122415 kernel: audit: type=1131 audit(1734062495.114:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.103311 systemd[1]: Starting ignition-quench.service... Dec 13 04:01:35.108041 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 04:01:35.110132 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 04:01:35.110291 systemd[1]: Finished ignition-quench.service. Dec 13 04:01:35.115258 systemd[1]: Reached target ignition-complete.target. Dec 13 04:01:35.124560 systemd[1]: Starting initrd-parse-etc.service... Dec 13 04:01:35.148532 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 04:01:35.148619 systemd[1]: Finished initrd-parse-etc.service. 
Dec 13 04:01:35.158873 kernel: audit: type=1130 audit(1734062495.150:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.150345 systemd[1]: Reached target initrd-fs.target. Dec 13 04:01:35.159299 systemd[1]: Reached target initrd.target. Dec 13 04:01:35.160675 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 04:01:35.161372 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 04:01:35.175119 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 04:01:35.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.176315 systemd[1]: Starting initrd-cleanup.service... Dec 13 04:01:35.189133 systemd[1]: Stopped target nss-lookup.target. Dec 13 04:01:35.190263 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 04:01:35.191380 systemd[1]: Stopped target timers.target. Dec 13 04:01:35.192355 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 04:01:35.192993 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 04:01:35.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.194166 systemd[1]: Stopped target initrd.target. Dec 13 04:01:35.195208 systemd[1]: Stopped target basic.target. Dec 13 04:01:35.196173 systemd[1]: Stopped target ignition-complete.target. Dec 13 04:01:35.197192 systemd[1]: Stopped target ignition-diskful.target. Dec 13 04:01:35.198224 systemd[1]: Stopped target initrd-root-device.target. Dec 13 04:01:35.199254 systemd[1]: Stopped target remote-fs.target. Dec 13 04:01:35.200233 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 04:01:35.201243 systemd[1]: Stopped target sysinit.target. Dec 13 04:01:35.202229 systemd[1]: Stopped target local-fs.target. Dec 13 04:01:35.203197 systemd[1]: Stopped target local-fs-pre.target. Dec 13 04:01:35.204184 systemd[1]: Stopped target swap.target. Dec 13 04:01:35.205189 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 04:01:35.205891 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 04:01:35.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.207111 systemd[1]: Stopped target cryptsetup.target. Dec 13 04:01:35.208138 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 04:01:35.208780 systemd[1]: Stopped dracut-initqueue.service. Dec 13 04:01:35.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:01:35.209926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 04:01:35.210711 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 04:01:35.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.211840 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 04:01:35.212476 systemd[1]: Stopped ignition-files.service. Dec 13 04:01:35.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.214303 systemd[1]: Stopping ignition-mount.service... Dec 13 04:01:35.219071 iscsid[638]: iscsid shutting down. Dec 13 04:01:35.220614 systemd[1]: Stopping iscsid.service... Dec 13 04:01:35.221561 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 04:01:35.222375 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 04:01:35.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.224374 systemd[1]: Stopping sysroot-boot.service... Dec 13 04:01:35.225397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 04:01:35.226266 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 04:01:35.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.227516 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 04:01:35.228240 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 04:01:35.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.231077 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 04:01:35.231746 systemd[1]: Stopped iscsid.service. Dec 13 04:01:35.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.234037 systemd[1]: Stopping iscsiuio.service... Dec 13 04:01:35.235409 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 04:01:35.236144 systemd[1]: Finished initrd-cleanup.service. Dec 13 04:01:35.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.241577 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 04:01:35.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.241873 systemd[1]: Stopped iscsiuio.service. 
Dec 13 04:01:35.245215 ignition[815]: INFO : Ignition 2.14.0 Dec 13 04:01:35.245215 ignition[815]: INFO : Stage: umount Dec 13 04:01:35.246260 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:01:35.246260 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:01:35.249100 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:01:35.250503 ignition[815]: INFO : umount: umount passed Dec 13 04:01:35.250503 ignition[815]: INFO : Ignition finished successfully Dec 13 04:01:35.251961 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 04:01:35.252207 systemd[1]: Stopped ignition-mount.service. Dec 13 04:01:35.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.253914 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 04:01:35.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.254019 systemd[1]: Stopped ignition-disks.service. Dec 13 04:01:35.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.255455 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 04:01:35.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.255543 systemd[1]: Stopped ignition-kargs.service. Dec 13 04:01:35.256945 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 04:01:35.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.257029 systemd[1]: Stopped ignition-fetch.service. Dec 13 04:01:35.258546 systemd[1]: Stopped target network.target. Dec 13 04:01:35.260112 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 04:01:35.260228 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 04:01:35.261526 systemd[1]: Stopped target paths.target. Dec 13 04:01:35.262714 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 04:01:35.264133 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 04:01:35.264855 systemd[1]: Stopped target slices.target. Dec 13 04:01:35.265858 systemd[1]: Stopped target sockets.target. Dec 13 04:01:35.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.266872 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 04:01:35.266912 systemd[1]: Closed iscsid.socket. Dec 13 04:01:35.267955 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 04:01:35.267987 systemd[1]: Closed iscsiuio.socket. Dec 13 04:01:35.272634 systemd[1]: ignition-setup.service: Deactivated successfully. 
Dec 13 04:01:35.272673 systemd[1]: Stopped ignition-setup.service. Dec 13 04:01:35.273647 systemd[1]: Stopping systemd-networkd.service... Dec 13 04:01:35.274815 systemd[1]: Stopping systemd-resolved.service... Dec 13 04:01:35.278215 systemd-networkd[633]: eth0: DHCPv6 lease lost Dec 13 04:01:35.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.280304 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 04:01:35.280413 systemd[1]: Stopped systemd-networkd.service. Dec 13 04:01:35.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.281257 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 04:01:35.285000 audit: BPF prog-id=9 op=UNLOAD Dec 13 04:01:35.285000 audit: BPF prog-id=6 op=UNLOAD Dec 13 04:01:35.281347 systemd[1]: Stopped systemd-resolved.service. Dec 13 04:01:35.284725 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 04:01:35.284756 systemd[1]: Closed systemd-networkd.socket. Dec 13 04:01:35.286476 systemd[1]: Stopping network-cleanup.service... Dec 13 04:01:35.288826 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 04:01:35.288881 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 04:01:35.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.290101 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:01:35.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.290144 systemd[1]: Stopped systemd-sysctl.service. Dec 13 04:01:35.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.291578 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 04:01:35.291633 systemd[1]: Stopped systemd-modules-load.service. Dec 13 04:01:35.296587 systemd[1]: Stopping systemd-udevd.service... Dec 13 04:01:35.298674 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 04:01:35.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.301877 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 04:01:35.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.302005 systemd[1]: Stopped network-cleanup.service. Dec 13 04:01:35.302838 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 04:01:35.302970 systemd[1]: Stopped systemd-udevd.service. Dec 13 04:01:35.304424 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 04:01:35.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.304482 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 04:01:35.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.305426 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 04:01:35.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.305461 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 04:01:35.306384 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 04:01:35.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.306427 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 04:01:35.307476 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 04:01:35.307517 systemd[1]: Stopped dracut-cmdline.service. Dec 13 04:01:35.308563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 04:01:35.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:35.308603 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 04:01:35.310483 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 04:01:35.311406 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 04:01:35.311453 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 04:01:35.316557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 04:01:35.316640 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 04:01:35.398405 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 04:01:36.086777 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 04:01:36.087031 systemd[1]: Stopped sysroot-boot.service. Dec 13 04:01:36.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:36.089939 systemd[1]: Reached target initrd-switch-root.target. Dec 13 04:01:36.091775 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 04:01:36.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:36.091878 systemd[1]: Stopped initrd-setup-root.service. Dec 13 04:01:36.095535 systemd[1]: Starting initrd-switch-root.service... Dec 13 04:01:36.139928 systemd[1]: Switching root. 
Dec 13 04:01:36.174630 systemd-journald[186]: Journal stopped Dec 13 04:01:43.441734 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Dec 13 04:01:43.441784 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 04:01:43.441802 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 04:01:43.441818 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 04:01:43.441832 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 04:01:43.441847 kernel: SELinux: policy capability open_perms=1 Dec 13 04:01:43.441860 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 04:01:43.441875 kernel: SELinux: policy capability always_check_network=0 Dec 13 04:01:43.441887 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 04:01:43.441898 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 04:01:43.441912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 04:01:43.441924 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 04:01:43.441936 systemd[1]: Successfully loaded SELinux policy in 85.449ms. Dec 13 04:01:43.441953 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.233ms. Dec 13 04:01:43.441969 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 04:01:43.441982 systemd[1]: Detected virtualization kvm. Dec 13 04:01:43.441995 systemd[1]: Detected architecture x86-64. Dec 13 04:01:43.442007 systemd[1]: Detected first boot. Dec 13 04:01:43.442022 systemd[1]: Hostname set to <ci-3510-3-6-f-70d9d685f8.novalocal>. Dec 13 04:01:43.442035 systemd[1]: Initializing machine ID from VM UUID. Dec 13 04:01:43.442049 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 04:01:43.443142 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 13 04:01:43.443157 kernel: audit: type=1400 audit(1734062497.038:87): avc: denied { associate } for pid=848 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 04:01:43.443171 kernel: audit: type=1300 audit(1734062497.038:87): arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:01:43.443188 kernel: audit: type=1327 audit(1734062497.038:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:01:43.443201 kernel: audit: type=1400 audit(1734062497.043:88): avc: denied { associate } for pid=848 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 04:01:43.443214 kernel: audit: type=1300 audit(1734062497.043:88): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:01:43.443228 kernel: audit: type=1307 audit(1734062497.043:88): cwd="/" Dec 13 04:01:43.443247 kernel: audit: type=1302 audit(1734062497.043:88): item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:43.443265 kernel: audit: type=1302 audit(1734062497.043:88): item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:43.443281 kernel: audit: type=1327 audit(1734062497.043:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:01:43.443295 systemd[1]: Populated /etc with preset unit settings. Dec 13 04:01:43.443309 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:01:43.443322 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:01:43.443337 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 04:01:43.443349 kernel: audit: type=1334 audit(1734062503.208:89): prog-id=12 op=LOAD Dec 13 04:01:43.443361 kernel: audit: type=1334 audit(1734062503.208:90): prog-id=3 op=UNLOAD Dec 13 04:01:43.443375 kernel: audit: type=1334 audit(1734062503.211:91): prog-id=13 op=LOAD Dec 13 04:01:43.443387 kernel: audit: type=1334 audit(1734062503.214:92): prog-id=14 op=LOAD Dec 13 04:01:43.443399 kernel: audit: type=1334 audit(1734062503.214:93): prog-id=4 op=UNLOAD Dec 13 04:01:43.443415 kernel: audit: type=1334 audit(1734062503.214:94): prog-id=5 op=UNLOAD Dec 13 04:01:43.443427 kernel: audit: type=1334 audit(1734062503.216:95): prog-id=15 op=LOAD Dec 13 04:01:43.443438 kernel: audit: type=1334 audit(1734062503.216:96): prog-id=12 op=UNLOAD Dec 13 04:01:43.443452 kernel: audit: type=1334 audit(1734062503.219:97): prog-id=16 op=LOAD Dec 13 04:01:43.443464 kernel: audit: type=1334 audit(1734062503.222:98): prog-id=17 op=LOAD Dec 13 04:01:43.443476 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 04:01:43.443490 systemd[1]: Stopped initrd-switch-root.service. Dec 13 04:01:43.443503 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 04:01:43.443531 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 04:01:43.443557 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 04:01:43.443572 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 04:01:43.443590 systemd[1]: Created slice system-getty.slice. Dec 13 04:01:43.443603 systemd[1]: Created slice system-modprobe.slice. Dec 13 04:01:43.443620 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 04:01:43.443633 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 04:01:43.443664 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 04:01:43.443680 systemd[1]: Created slice user.slice. Dec 13 04:01:43.443696 systemd[1]: Started systemd-ask-password-console.path. Dec 13 04:01:43.443708 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 04:01:43.443721 systemd[1]: Set up automount boot.automount. Dec 13 04:01:43.443735 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 04:01:43.443748 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 04:01:43.443760 systemd[1]: Stopped target initrd-fs.target. Dec 13 04:01:43.443773 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 04:01:43.443785 systemd[1]: Reached target integritysetup.target. Dec 13 04:01:43.443798 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 04:01:43.443812 systemd[1]: Reached target remote-fs.target. Dec 13 04:01:43.443825 systemd[1]: Reached target slices.target. Dec 13 04:01:43.443838 systemd[1]: Reached target swap.target. Dec 13 04:01:43.443850 systemd[1]: Reached target torcx.target. Dec 13 04:01:43.443863 systemd[1]: Reached target veritysetup.target. Dec 13 04:01:43.443876 systemd[1]: Listening on systemd-coredump.socket. Dec 13 04:01:43.443891 systemd[1]: Listening on systemd-initctl.socket. Dec 13 04:01:43.443903 systemd[1]: Listening on systemd-networkd.socket. Dec 13 04:01:43.443916 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 04:01:43.443928 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 04:01:43.443942 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 04:01:43.443955 systemd[1]: Mounting dev-hugepages.mount... Dec 13 04:01:43.443968 systemd[1]: Mounting dev-mqueue.mount... Dec 13 04:01:43.443980 systemd[1]: Mounting media.mount... 
Dec 13 04:01:43.443994 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:43.444007 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 04:01:43.444020 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 04:01:43.444033 systemd[1]: Mounting tmp.mount... Dec 13 04:01:43.444046 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 04:01:43.445095 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:01:43.445111 systemd[1]: Starting kmod-static-nodes.service... Dec 13 04:01:43.445123 systemd[1]: Starting modprobe@configfs.service... Dec 13 04:01:43.445134 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:01:43.445146 systemd[1]: Starting modprobe@drm.service... Dec 13 04:01:43.445157 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:01:43.445169 systemd[1]: Starting modprobe@fuse.service... Dec 13 04:01:43.445180 systemd[1]: Starting modprobe@loop.service... Dec 13 04:01:43.445192 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 04:01:43.445206 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 04:01:43.445218 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 04:01:43.445229 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 04:01:43.445240 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 04:01:43.446729 systemd[1]: Stopped systemd-journald.service. Dec 13 04:01:43.446742 systemd[1]: Starting systemd-journald.service... Dec 13 04:01:43.446753 systemd[1]: Starting systemd-modules-load.service... Dec 13 04:01:43.446765 systemd[1]: Starting systemd-network-generator.service... Dec 13 04:01:43.446776 systemd[1]: Starting systemd-remount-fs.service... Dec 13 04:01:43.446790 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 04:01:43.446802 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 04:01:43.446813 systemd[1]: Stopped verity-setup.service. Dec 13 04:01:43.446825 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:43.446836 systemd[1]: Mounted dev-hugepages.mount. Dec 13 04:01:43.446847 systemd[1]: Mounted dev-mqueue.mount. Dec 13 04:01:43.446858 systemd[1]: Mounted media.mount. Dec 13 04:01:43.446887 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 04:01:43.446900 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 04:01:43.446915 systemd[1]: Mounted tmp.mount. Dec 13 04:01:43.446927 systemd[1]: Finished kmod-static-nodes.service. Dec 13 04:01:43.446938 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 04:01:43.446949 systemd[1]: Finished modprobe@configfs.service. Dec 13 04:01:43.446960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:01:43.446971 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:01:43.446985 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:01:43.446998 systemd[1]: Finished modprobe@drm.service. Dec 13 04:01:43.447009 kernel: loop: module loaded Dec 13 04:01:43.447019 kernel: fuse: init (API version 7.34) Dec 13 04:01:43.447031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:01:43.447042 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:01:43.447068 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:01:43.447080 systemd[1]: Finished modprobe@loop.service. 
Dec 13 04:01:43.447091 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 04:01:43.447104 systemd[1]: Finished modprobe@fuse.service. Dec 13 04:01:43.447115 systemd[1]: Finished systemd-modules-load.service. Dec 13 04:01:43.447127 systemd[1]: Finished systemd-network-generator.service. Dec 13 04:01:43.447143 systemd-journald[918]: Journal started Dec 13 04:01:43.447184 systemd-journald[918]: Runtime Journal (/run/log/journal/ca97bd47ca2d4a62bb010776e2063747) is 4.9M, max 39.5M, 34.5M free. Dec 13 04:01:36.514000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 04:01:36.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 04:01:36.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 04:01:36.634000 audit: BPF prog-id=10 op=LOAD Dec 13 04:01:36.634000 audit: BPF prog-id=10 op=UNLOAD Dec 13 04:01:36.635000 audit: BPF prog-id=11 op=LOAD Dec 13 04:01:36.635000 audit: BPF prog-id=11 op=UNLOAD Dec 13 04:01:37.038000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 04:01:37.038000 audit[848]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:01:37.038000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:01:37.043000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 04:01:37.043000 audit[848]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:01:37.043000 audit: CWD cwd="/" Dec 13 04:01:37.043000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:37.043000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:37.043000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:01:43.208000 audit: BPF prog-id=12 op=LOAD Dec 13 04:01:43.208000 audit: BPF prog-id=3 op=UNLOAD Dec 13 04:01:43.211000 audit: BPF prog-id=13 op=LOAD Dec 13 04:01:43.214000 audit: BPF prog-id=14 op=LOAD Dec 13 04:01:43.214000 audit: BPF prog-id=4 op=UNLOAD Dec 13 04:01:43.214000 audit: BPF prog-id=5 op=UNLOAD Dec 13 04:01:43.216000 audit: BPF prog-id=15 op=LOAD Dec 13 04:01:43.216000 audit: BPF prog-id=12 op=UNLOAD Dec 13 04:01:43.219000 audit: BPF prog-id=16 op=LOAD Dec 13 04:01:43.222000 audit: BPF prog-id=17 op=LOAD Dec 13 04:01:43.222000 audit: BPF prog-id=13 op=UNLOAD Dec 13 04:01:43.222000 audit: BPF prog-id=14 op=UNLOAD Dec 13 04:01:43.225000 audit: BPF prog-id=18 op=LOAD Dec 13 04:01:43.225000 audit: BPF prog-id=15 op=UNLOAD Dec 13 04:01:43.228000 audit: BPF prog-id=19 op=LOAD Dec 13 04:01:43.231000 audit: BPF prog-id=20 op=LOAD Dec 13 04:01:43.231000 audit: BPF prog-id=16 op=UNLOAD Dec 13 04:01:43.231000 audit: BPF prog-id=17 op=UNLOAD Dec 13 04:01:43.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.242000 audit: BPF prog-id=18 op=UNLOAD Dec 13 04:01:43.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.366000 audit: BPF prog-id=21 op=LOAD Dec 13 04:01:43.366000 audit: BPF prog-id=22 op=LOAD Dec 13 04:01:43.367000 audit: BPF prog-id=23 op=LOAD Dec 13 04:01:43.367000 audit: BPF prog-id=19 op=UNLOAD Dec 13 04:01:43.367000 audit: BPF prog-id=20 op=UNLOAD Dec 13 04:01:43.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:01:43.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.440000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 04:01:43.440000 audit[918]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffd4834920 a2=4000 a3=7fffd48349bc items=0 ppid=1 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:01:43.440000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 04:01:43.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:01:43.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.450111 systemd[1]: Started systemd-journald.service. Dec 13 04:01:43.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:37.031704 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:01:43.206859 systemd[1]: Queued start job for default target multi-user.target. Dec 13 04:01:37.032928 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:01:43.206871 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 04:01:43.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:37.032979 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:01:43.231456 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 04:01:37.033047 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 04:01:43.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:37.033135 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 04:01:43.451674 systemd[1]: Finished systemd-remount-fs.service. Dec 13 04:01:37.033278 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 04:01:37.033314 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 04:01:43.452560 systemd[1]: Reached target network-pre.target. 
Dec 13 04:01:37.033800 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 04:01:37.033894 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:01:37.033928 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:01:37.036379 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 04:01:37.036470 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 04:01:37.036521 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 04:01:37.036565 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 04:01:37.036611 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 04:01:37.036649 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 04:01:42.675754 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:01:42.677090 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:01:42.677400 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:01:42.677919 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:01:43.456961 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Dec 13 04:01:42.678112 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:42Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 04:01:42.678307 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T04:01:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 04:01:43.458494 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 04:01:43.458973 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 04:01:43.465099 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 04:01:43.466952 systemd[1]: Starting systemd-journal-flush.service... Dec 13 04:01:43.467559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:01:43.468637 systemd[1]: Starting systemd-random-seed.service... Dec 13 04:01:43.469166 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:01:43.470213 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:01:43.473255 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 04:01:43.473825 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 04:01:43.480135 systemd-journald[918]: Time spent on flushing to /var/log/journal/ca97bd47ca2d4a62bb010776e2063747 is 40.432ms for 1102 entries. Dec 13 04:01:43.480135 systemd-journald[918]: System Journal (/var/log/journal/ca97bd47ca2d4a62bb010776e2063747) is 8.0M, max 584.8M, 576.8M free. Dec 13 04:01:43.569385 systemd-journald[918]: Received client request to flush runtime journal. Dec 13 04:01:43.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.488045 systemd[1]: Finished systemd-random-seed.service. Dec 13 04:01:43.488616 systemd[1]: Reached target first-boot-complete.target. Dec 13 04:01:43.570310 udevadm[958]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 04:01:43.525894 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 04:01:43.527759 systemd[1]: Starting systemd-sysusers.service... Dec 13 04:01:43.530891 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:01:43.539479 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 04:01:43.541029 systemd[1]: Starting systemd-udev-settle.service... 
Dec 13 04:01:43.570307 systemd[1]: Finished systemd-journal-flush.service. Dec 13 04:01:43.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:43.599539 systemd[1]: Finished systemd-sysusers.service. Dec 13 04:01:43.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.273525 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 04:01:44.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.276000 audit: BPF prog-id=24 op=LOAD Dec 13 04:01:44.276000 audit: BPF prog-id=25 op=LOAD Dec 13 04:01:44.276000 audit: BPF prog-id=7 op=UNLOAD Dec 13 04:01:44.276000 audit: BPF prog-id=8 op=UNLOAD Dec 13 04:01:44.277448 systemd[1]: Starting systemd-udevd.service... Dec 13 04:01:44.316884 systemd-udevd[959]: Using default interface naming scheme 'v252'. Dec 13 04:01:44.374929 systemd[1]: Started systemd-udevd.service. Dec 13 04:01:44.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.392000 audit: BPF prog-id=26 op=LOAD Dec 13 04:01:44.394481 systemd[1]: Starting systemd-networkd.service... Dec 13 04:01:44.414000 audit: BPF prog-id=27 op=LOAD Dec 13 04:01:44.414000 audit: BPF prog-id=28 op=LOAD Dec 13 04:01:44.415000 audit: BPF prog-id=29 op=LOAD Dec 13 04:01:44.417187 systemd[1]: Starting systemd-userdbd.service... Dec 13 04:01:44.457729 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 04:01:44.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.483491 systemd[1]: Started systemd-userdbd.service. Dec 13 04:01:44.505542 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 04:01:44.553198 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 04:01:44.561139 kernel: ACPI: button: Power Button [PWRF] Dec 13 04:01:44.574149 systemd-networkd[974]: lo: Link UP Dec 13 04:01:44.574999 systemd-networkd[974]: lo: Gained carrier Dec 13 04:01:44.575555 systemd-networkd[974]: Enumeration completed Dec 13 04:01:44.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.575717 systemd[1]: Started systemd-networkd.service. Dec 13 04:01:44.576615 systemd-networkd[974]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 04:01:44.578519 systemd-networkd[974]: eth0: Link UP Dec 13 04:01:44.578608 systemd-networkd[974]: eth0: Gained carrier Dec 13 04:01:44.581000 audit[963]: AVC avc: denied { confidentiality } for pid=963 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 04:01:44.588182 systemd-networkd[974]: eth0: DHCPv4 address 172.24.4.88/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 04:01:44.581000 audit[963]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564a5d5ee0c0 a1=337fc a2=7f415ad43bc5 a3=5 items=110 ppid=959 pid=963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:01:44.581000 audit: CWD cwd="/" Dec 13 04:01:44.581000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=1 name=(null) inode=13656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=2 name=(null) inode=13656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=3 name=(null) inode=13657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=4 name=(null) inode=13656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=5 name=(null) inode=13658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=6 name=(null) inode=13656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=7 name=(null) inode=13659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=8 name=(null) inode=13659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=9 name=(null) inode=13660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=10 name=(null) inode=13659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=11 name=(null) inode=13661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=12 name=(null) inode=13659 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=13 name=(null) inode=13662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=14 name=(null) inode=13659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=15 name=(null) inode=13663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=16 name=(null) inode=13659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=17 name=(null) inode=13664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=18 name=(null) inode=13656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=19 name=(null) inode=13665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=20 name=(null) inode=13665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=21 name=(null) inode=13666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=22 name=(null) inode=13665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=23 name=(null) inode=13667 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=24 name=(null) inode=13665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=25 name=(null) inode=13668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=26 name=(null) inode=13665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=27 name=(null) inode=13669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=28 name=(null) inode=13665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=29 name=(null) inode=13670 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=30 name=(null) inode=13656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=31 name=(null) inode=13671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=32 name=(null) inode=13671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=33 name=(null) inode=13672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=34 name=(null) inode=13671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=35 name=(null) inode=13673 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=36 name=(null) inode=13671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=37 name=(null) inode=13674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=38 name=(null) inode=13671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=39 name=(null) inode=13675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=40 name=(null) inode=13671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=41 name=(null) inode=13676 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=42 name=(null) inode=13656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=43 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=44 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=45 name=(null) inode=13678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=46 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=47 name=(null) inode=13679 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=48 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=49 name=(null) inode=13680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=50 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=51 name=(null) inode=13681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=52 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=53 name=(null) inode=13682 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=55 name=(null) inode=13683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=56 name=(null) inode=13683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=57 name=(null) inode=13684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=58 name=(null) inode=13683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=59 name=(null) inode=13685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=60 name=(null) inode=13683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=61 
name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=62 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=63 name=(null) inode=13687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=64 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=65 name=(null) inode=13688 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=66 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=67 name=(null) inode=13689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=68 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=69 name=(null) inode=13690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=70 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=71 name=(null) inode=13691 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=72 name=(null) inode=13683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=73 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=74 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=75 name=(null) inode=13693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=76 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=77 name=(null) inode=13694 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=78 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=79 name=(null) inode=13695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=80 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=81 name=(null) inode=13696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=82 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=83 name=(null) inode=13697 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=84 name=(null) inode=13683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=85 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=86 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=87 name=(null) inode=13699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=88 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=89 name=(null) inode=13700 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=90 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=91 name=(null) inode=13701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=92 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=93 name=(null) inode=13702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=94 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=95 name=(null) inode=13703 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=96 name=(null) inode=13683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=97 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=98 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=99 name=(null) inode=13705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=100 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=101 name=(null) inode=13706 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=102 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=103 name=(null) inode=13707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=104 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=105 name=(null) inode=13708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=106 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=107 name=(null) inode=13709 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: PATH item=109 name=(null) inode=14670 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:01:44.581000 audit: 
PROCTITLE proctitle="(udev-worker)" Dec 13 04:01:44.621100 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 04:01:44.627079 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 04:01:44.633078 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 04:01:44.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.675499 systemd[1]: Finished systemd-udev-settle.service. Dec 13 04:01:44.677188 systemd[1]: Starting lvm2-activation-early.service... Dec 13 04:01:44.704723 lvm[989]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:01:44.733875 systemd[1]: Finished lvm2-activation-early.service. Dec 13 04:01:44.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.735318 systemd[1]: Reached target cryptsetup.target. Dec 13 04:01:44.738857 systemd[1]: Starting lvm2-activation.service... Dec 13 04:01:44.743073 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 04:01:44.777934 systemd[1]: Finished lvm2-activation.service. Dec 13 04:01:44.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.779323 systemd[1]: Reached target local-fs-pre.target. Dec 13 04:01:44.780473 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 04:01:44.780557 systemd[1]: Reached target local-fs.target. Dec 13 04:01:44.781652 systemd[1]: Reached target machines.target. Dec 13 04:01:44.785176 systemd[1]: Starting ldconfig.service... Dec 13 04:01:44.787602 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:01:44.787699 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:01:44.791494 systemd[1]: Starting systemd-boot-update.service... Dec 13 04:01:44.795472 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 04:01:44.801496 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 04:01:44.812241 systemd[1]: Starting systemd-sysext.service... Dec 13 04:01:44.830651 systemd[1]: boot.automount: Got automount request for /boot, triggered by 992 (bootctl) Dec 13 04:01:44.833436 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 04:01:44.871692 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 04:01:44.918270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 04:01:44.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:44.938124 systemd[1]: usr-share-oem.mount: Deactivated successfully. 
Dec 13 04:01:44.938510 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 04:01:45.455124 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 04:01:45.513483 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 04:01:45.515010 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 04:01:45.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.557610 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 04:01:45.590177 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 04:01:45.652878 (sd-sysext)[1005]: Using extensions 'kubernetes'. Dec 13 04:01:45.655564 (sd-sysext)[1005]: Merged extensions into '/usr'. Dec 13 04:01:45.702227 systemd-fsck[1002]: fsck.fat 4.2 (2021-01-31) Dec 13 04:01:45.702227 systemd-fsck[1002]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 04:01:45.716575 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 04:01:45.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.721263 systemd[1]: Mounting boot.mount... Dec 13 04:01:45.725358 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:45.728272 systemd[1]: Mounting usr-share-oem.mount... Dec 13 04:01:45.728980 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:01:45.733952 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:01:45.736260 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:01:45.739347 systemd[1]: Starting modprobe@loop.service... Dec 13 04:01:45.741203 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:01:45.741328 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:01:45.741457 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:45.742587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:01:45.742730 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:01:45.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.743584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:01:45.743704 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:01:45.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:01:45.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.749257 systemd[1]: Mounted boot.mount. Dec 13 04:01:45.750627 systemd[1]: Mounted usr-share-oem.mount. Dec 13 04:01:45.751424 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:01:45.751561 systemd[1]: Finished modprobe@loop.service. Dec 13 04:01:45.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.759550 systemd[1]: Finished systemd-sysext.service. Dec 13 04:01:45.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:45.762589 systemd[1]: Starting ensure-sysext.service... Dec 13 04:01:45.767243 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:01:45.767340 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:01:45.769046 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 04:01:45.774031 systemd[1]: Reloading. Dec 13 04:01:45.796912 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 04:01:45.809427 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 04:01:45.825174 systemd-tmpfiles[1013]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 04:01:45.876513 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T04:01:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:01:45.876548 /usr/lib/systemd/system-generators/torcx-generator[1032]: time="2024-12-13T04:01:45Z" level=info msg="torcx already run" Dec 13 04:01:45.913405 systemd-networkd[974]: eth0: Gained IPv6LL Dec 13 04:01:45.998071 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:01:45.998095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:01:46.023462 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 04:01:46.095000 audit: BPF prog-id=30 op=LOAD Dec 13 04:01:46.095000 audit: BPF prog-id=26 op=UNLOAD Dec 13 04:01:46.095000 audit: BPF prog-id=31 op=LOAD Dec 13 04:01:46.095000 audit: BPF prog-id=21 op=UNLOAD Dec 13 04:01:46.096000 audit: BPF prog-id=32 op=LOAD Dec 13 04:01:46.096000 audit: BPF prog-id=33 op=LOAD Dec 13 04:01:46.096000 audit: BPF prog-id=22 op=UNLOAD Dec 13 04:01:46.096000 audit: BPF prog-id=23 op=UNLOAD Dec 13 04:01:46.098000 audit: BPF prog-id=34 op=LOAD Dec 13 04:01:46.098000 audit: BPF prog-id=27 op=UNLOAD Dec 13 04:01:46.098000 audit: BPF prog-id=35 op=LOAD Dec 13 04:01:46.098000 audit: BPF prog-id=36 op=LOAD Dec 13 04:01:46.098000 audit: BPF prog-id=28 op=UNLOAD Dec 13 04:01:46.098000 audit: BPF prog-id=29 op=UNLOAD Dec 13 04:01:46.100000 audit: BPF prog-id=37 op=LOAD Dec 13 04:01:46.100000 audit: BPF prog-id=38 op=LOAD Dec 13 04:01:46.100000 audit: BPF prog-id=24 op=UNLOAD Dec 13 04:01:46.100000 audit: BPF prog-id=25 op=UNLOAD Dec 13 04:01:46.104790 systemd[1]: Finished systemd-boot-update.service. Dec 13 04:01:46.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.122074 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:46.122484 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.124531 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:01:46.127101 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:01:46.131088 systemd[1]: Starting modprobe@loop.service... Dec 13 04:01:46.132327 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.132480 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:01:46.132624 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:46.133686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:01:46.133828 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:01:46.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.135111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:01:46.135235 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:01:46.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:01:46.136026 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:01:46.137554 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:46.137802 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.140090 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:01:46.142736 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:01:46.146805 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.147011 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:01:46.147194 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:46.148214 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:01:46.149396 systemd[1]: Finished modprobe@loop.service. Dec 13 04:01:46.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.151074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:01:46.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.151218 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:01:46.155092 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:46.155369 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.157205 systemd[1]: Starting modprobe@drm.service... Dec 13 04:01:46.160681 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:01:46.164384 systemd[1]: Starting modprobe@loop.service... Dec 13 04:01:46.165023 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.165194 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:01:46.166661 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 04:01:46.167558 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:01:46.168842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:01:46.169006 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 04:01:46.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.170038 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:01:46.170214 systemd[1]: Finished modprobe@drm.service. Dec 13 04:01:46.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.171274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:01:46.171390 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:01:46.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.172488 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:01:46.172613 systemd[1]: Finished modprobe@loop.service. Dec 13 04:01:46.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.174921 systemd[1]: Finished ensure-sysext.service. Dec 13 04:01:46.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.176422 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:01:46.176467 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.183772 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 04:01:46.259850 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Dec 13 04:01:46.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.261805 systemd[1]: Starting audit-rules.service... Dec 13 04:01:46.263361 systemd[1]: Starting clean-ca-certificates.service... Dec 13 04:01:46.264962 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 04:01:46.266000 audit: BPF prog-id=39 op=LOAD Dec 13 04:01:46.268443 systemd[1]: Starting systemd-resolved.service... Dec 13 04:01:46.272000 audit: BPF prog-id=40 op=LOAD Dec 13 04:01:46.273090 systemd[1]: Starting systemd-timesyncd.service... Dec 13 04:01:46.275471 systemd[1]: Starting systemd-update-utmp.service... Dec 13 04:01:46.294039 systemd[1]: Finished clean-ca-certificates.service. Dec 13 04:01:46.294857 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 04:01:46.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.295000 audit[1095]: SYSTEM_BOOT pid=1095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.297026 systemd[1]: Finished systemd-update-utmp.service. Dec 13 04:01:46.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.339561 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 04:01:46.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:01:46.364000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 04:01:46.364000 audit[1109]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5bda7d70 a2=420 a3=0 items=0 ppid=1089 pid=1109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:01:46.364000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 04:01:46.365630 augenrules[1109]: No rules Dec 13 04:01:46.365838 systemd[1]: Finished audit-rules.service. Dec 13 04:01:46.376251 ldconfig[991]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 04:01:46.381829 systemd[1]: Started systemd-timesyncd.service. Dec 13 04:01:46.382435 systemd[1]: Reached target time-set.target. Dec 13 04:01:46.398038 systemd[1]: Finished ldconfig.service. Dec 13 04:01:46.399715 systemd[1]: Starting systemd-update-done.service... Dec 13 04:01:46.400772 systemd-resolved[1093]: Positive Trust Anchors: Dec 13 04:01:46.401012 systemd-resolved[1093]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 04:01:46.401135 systemd-resolved[1093]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 04:01:46.407071 systemd[1]: Finished systemd-update-done.service. Dec 13 04:01:46.409242 systemd-resolved[1093]: Using system hostname 'ci-3510-3-6-f-70d9d685f8.novalocal'. Dec 13 04:01:46.411069 systemd[1]: Started systemd-resolved.service. Dec 13 04:01:46.411577 systemd[1]: Reached target network.target. Dec 13 04:01:46.411985 systemd[1]: Reached target network-online.target. Dec 13 04:01:46.412420 systemd[1]: Reached target nss-lookup.target. Dec 13 04:01:46.412831 systemd[1]: Reached target sysinit.target. Dec 13 04:01:46.413327 systemd[1]: Started motdgen.path. Dec 13 04:01:46.413751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 04:01:46.414397 systemd[1]: Started logrotate.timer. Dec 13 04:01:46.414858 systemd[1]: Started mdadm.timer. Dec 13 04:01:46.415255 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 04:01:46.415682 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 04:01:46.415713 systemd[1]: Reached target paths.target. Dec 13 04:01:46.416119 systemd[1]: Reached target timers.target. Dec 13 04:01:46.416774 systemd[1]: Listening on dbus.socket. Dec 13 04:01:46.418123 systemd[1]: Starting docker.socket... Dec 13 04:01:46.421776 systemd[1]: Listening on sshd.socket. Dec 13 04:01:46.422391 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:01:46.422785 systemd[1]: Listening on docker.socket. Dec 13 04:01:46.423277 systemd[1]: Reached target sockets.target. Dec 13 04:01:46.423530 systemd-timesyncd[1094]: Contacted time server 185.123.84.51:123 (0.flatcar.pool.ntp.org). Dec 13 04:01:46.423585 systemd-timesyncd[1094]: Initial clock synchronization to Fri 2024-12-13 04:01:46.552308 UTC. Dec 13 04:01:46.423688 systemd[1]: Reached target basic.target. Dec 13 04:01:46.424153 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.424182 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 04:01:46.425151 systemd[1]: Starting containerd.service... Dec 13 04:01:46.427532 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 04:01:46.429071 systemd[1]: Starting dbus.service... Dec 13 04:01:46.431946 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 04:01:46.434998 systemd[1]: Starting extend-filesystems.service... Dec 13 04:01:46.441242 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 04:01:46.446697 systemd[1]: Starting kubelet.service... Dec 13 04:01:46.448784 systemd[1]: Starting motdgen.service... 
Dec 13 04:01:46.454230 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 04:01:46.455411 jq[1123]: false Dec 13 04:01:46.456357 systemd[1]: Starting sshd-keygen.service... Dec 13 04:01:46.460514 systemd[1]: Starting systemd-logind.service... Dec 13 04:01:46.463171 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:01:46.463254 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 04:01:46.463860 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 04:01:46.465774 systemd[1]: Starting update-engine.service... Dec 13 04:01:46.467804 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 04:01:46.470391 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 04:01:46.470633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 04:01:46.474844 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 04:01:46.475078 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 04:01:46.487089 jq[1136]: true Dec 13 04:01:46.512789 extend-filesystems[1124]: Found loop1 Dec 13 04:01:46.516984 jq[1147]: true Dec 13 04:01:46.517483 extend-filesystems[1124]: Found vda Dec 13 04:01:46.518110 extend-filesystems[1124]: Found vda1 Dec 13 04:01:46.521400 extend-filesystems[1124]: Found vda2 Dec 13 04:01:46.534905 extend-filesystems[1124]: Found vda3 Dec 13 04:01:46.535564 extend-filesystems[1124]: Found usr Dec 13 04:01:46.538212 extend-filesystems[1124]: Found vda4 Dec 13 04:01:46.538954 extend-filesystems[1124]: Found vda6 Dec 13 04:01:46.539204 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 04:01:46.539382 systemd[1]: Finished motdgen.service. Dec 13 04:01:46.539918 extend-filesystems[1124]: Found vda7 Dec 13 04:01:46.541086 extend-filesystems[1124]: Found vda9 Dec 13 04:01:46.541086 extend-filesystems[1124]: Checking size of /dev/vda9 Dec 13 04:01:46.561696 dbus-daemon[1120]: [system] SELinux support is enabled Dec 13 04:01:46.562221 systemd[1]: Started dbus.service. Dec 13 04:01:46.564929 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 04:01:46.564957 systemd[1]: Reached target system-config.target. Dec 13 04:01:46.565437 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 04:01:46.565452 systemd[1]: Reached target user-config.target. Dec 13 04:01:46.575781 env[1143]: time="2024-12-13T04:01:46.574280097Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 04:01:46.598682 extend-filesystems[1124]: Resized partition /dev/vda9 Dec 13 04:01:46.609622 update_engine[1134]: I1213 04:01:46.608556 1134 main.cc:92] Flatcar Update Engine starting Dec 13 04:01:46.617137 systemd[1]: Started update-engine.service. Dec 13 04:01:46.619640 extend-filesystems[1174]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 04:01:46.619599 systemd[1]: Started locksmithd.service. 
Dec 13 04:01:46.622382 update_engine[1134]: I1213 04:01:46.620462 1134 update_check_scheduler.cc:74] Next update check in 11m34s Dec 13 04:01:46.624990 systemd-logind[1132]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 04:01:46.625015 systemd-logind[1132]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 04:01:46.627456 systemd-logind[1132]: New seat seat0. Dec 13 04:01:46.629429 systemd[1]: Started systemd-logind.service. Dec 13 04:01:46.649087 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 04:01:46.671242 env[1143]: time="2024-12-13T04:01:46.671192126Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 04:01:46.671947 env[1143]: time="2024-12-13T04:01:46.671876950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:01:46.673305 env[1143]: time="2024-12-13T04:01:46.673264412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:01:46.673305 env[1143]: time="2024-12-13T04:01:46.673298997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:01:46.673533 env[1143]: time="2024-12-13T04:01:46.673502850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:01:46.673573 env[1143]: time="2024-12-13T04:01:46.673530041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 04:01:46.673573 env[1143]: time="2024-12-13T04:01:46.673546161Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 04:01:46.673573 env[1143]: time="2024-12-13T04:01:46.673559075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 04:01:46.673689 env[1143]: time="2024-12-13T04:01:46.673664272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:01:46.673941 env[1143]: time="2024-12-13T04:01:46.673915874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:01:46.674094 env[1143]: time="2024-12-13T04:01:46.674048603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:01:46.674134 env[1143]: time="2024-12-13T04:01:46.674095100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 04:01:46.674161 env[1143]: time="2024-12-13T04:01:46.674150104Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 04:01:46.674196 env[1143]: time="2024-12-13T04:01:46.674165172Z" level=info msg="metadata content store policy set" policy=shared Dec 13 04:01:46.787186 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 04:01:46.898272 extend-filesystems[1174]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 04:01:46.898272 extend-filesystems[1174]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 04:01:46.898272 extend-filesystems[1174]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 04:01:46.905914 extend-filesystems[1124]: Resized filesystem in /dev/vda9 Dec 13 04:01:46.900573 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 04:01:46.914197 bash[1173]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:01:46.900948 systemd[1]: Finished extend-filesystems.service. Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.908448941Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.908586459Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.908626564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911436384Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911528597Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911568622Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911602986Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911639054Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911673489Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911709777Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911742418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.911774137Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.912022643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 04:01:46.914551 env[1143]: time="2024-12-13T04:01:46.912273213Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 04:01:46.909472 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.912889289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.912952357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.912987894Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913129910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913171769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913206043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913235027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913266386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913299328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913363468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913393685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913429101Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913750895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913805587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.919280 env[1143]: time="2024-12-13T04:01:46.913839150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.920162 env[1143]: time="2024-12-13T04:01:46.913872232Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 04:01:46.920162 env[1143]: time="2024-12-13T04:01:46.913912498Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 04:01:46.920162 env[1143]: time="2024-12-13T04:01:46.913946882Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 04:01:46.920162 env[1143]: time="2024-12-13T04:01:46.913993029Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 04:01:46.920162 env[1143]: time="2024-12-13T04:01:46.915511166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 04:01:46.920465 env[1143]: time="2024-12-13T04:01:46.916321827Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 04:01:46.920465 env[1143]: time="2024-12-13T04:01:46.916557328Z" level=info msg="Connect containerd service" Dec 13 04:01:46.920465 env[1143]: time="2024-12-13T04:01:46.916706989Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.921153238Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.921429496Z" level=info msg="Start subscribing containerd event" Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.921939402Z" level=info msg="Start recovering state" Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.922014583Z" level=info msg="Start event monitor" Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.922035002Z" level=info msg="Start snapshots syncer" Dec 
13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.922045231Z" level=info msg="Start cni network conf syncer for default" Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.922069977Z" level=info msg="Start streaming server" Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.922423991Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 04:01:46.925825 env[1143]: time="2024-12-13T04:01:46.922624728Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 04:01:46.922955 systemd[1]: Started containerd.service. Dec 13 04:01:46.941215 env[1143]: time="2024-12-13T04:01:46.941164800Z" level=info msg="containerd successfully booted in 0.369790s" Dec 13 04:01:46.976469 locksmithd[1175]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 04:01:47.148599 sshd_keygen[1148]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 04:01:47.190437 systemd[1]: Finished sshd-keygen.service. Dec 13 04:01:47.192499 systemd[1]: Starting issuegen.service... Dec 13 04:01:47.198548 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 04:01:47.198701 systemd[1]: Finished issuegen.service. Dec 13 04:01:47.200471 systemd[1]: Starting systemd-user-sessions.service... Dec 13 04:01:47.207378 systemd[1]: Finished systemd-user-sessions.service. Dec 13 04:01:47.209433 systemd[1]: Started getty@tty1.service. Dec 13 04:01:47.210980 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 04:01:47.211682 systemd[1]: Reached target getty.target. Dec 13 04:01:48.495652 systemd[1]: Started kubelet.service. Dec 13 04:01:50.305945 kubelet[1200]: E1213 04:01:50.305867 1200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:01:50.308444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:01:50.308747 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:01:50.309340 systemd[1]: kubelet.service: Consumed 2.252s CPU time. Dec 13 04:01:53.554692 coreos-metadata[1119]: Dec 13 04:01:53.554 WARN failed to locate config-drive, using the metadata service API instead Dec 13 04:01:53.656457 coreos-metadata[1119]: Dec 13 04:01:53.656 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 04:01:53.997356 coreos-metadata[1119]: Dec 13 04:01:53.996 INFO Fetch successful Dec 13 04:01:53.997679 coreos-metadata[1119]: Dec 13 04:01:53.997 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 04:01:54.012784 coreos-metadata[1119]: Dec 13 04:01:54.012 INFO Fetch successful Dec 13 04:01:54.019505 unknown[1119]: wrote ssh authorized keys file for user: core Dec 13 04:01:54.053981 update-ssh-keys[1210]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:01:54.055803 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 04:01:54.056800 systemd[1]: Reached target multi-user.target. Dec 13 04:01:54.060407 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 04:01:54.078883 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 04:01:54.079360 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Dec 13 04:01:54.080790 systemd[1]: Startup finished in 967ms (kernel) + 9.623s (initrd) + 17.680s (userspace) = 28.271s. Dec 13 04:01:54.960633 systemd[1]: Created slice system-sshd.slice. Dec 13 04:01:54.964522 systemd[1]: Started sshd@0-172.24.4.88:22-172.24.4.1:44816.service. Dec 13 04:01:56.363046 sshd[1213]: Accepted publickey for core from 172.24.4.1 port 44816 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:01:56.368473 sshd[1213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:01:56.396789 systemd[1]: Created slice user-500.slice. Dec 13 04:01:56.400353 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 04:01:56.405449 systemd-logind[1132]: New session 1 of user core. Dec 13 04:01:56.422515 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 04:01:56.426159 systemd[1]: Starting user@500.service... Dec 13 04:01:56.434366 (systemd)[1216]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:01:56.579041 systemd[1216]: Queued start job for default target default.target. Dec 13 04:01:56.580342 systemd[1216]: Reached target paths.target. Dec 13 04:01:56.580470 systemd[1216]: Reached target sockets.target. Dec 13 04:01:56.580564 systemd[1216]: Reached target timers.target. Dec 13 04:01:56.580650 systemd[1216]: Reached target basic.target. Dec 13 04:01:56.580813 systemd[1]: Started user@500.service. Dec 13 04:01:56.581809 systemd[1]: Started session-1.scope. Dec 13 04:01:56.582878 systemd[1216]: Reached target default.target. Dec 13 04:01:56.583191 systemd[1216]: Startup finished in 134ms. Dec 13 04:01:56.982384 systemd[1]: Started sshd@1-172.24.4.88:22-172.24.4.1:44826.service. Dec 13 04:01:58.472365 sshd[1225]: Accepted publickey for core from 172.24.4.1 port 44826 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:01:58.475784 sshd[1225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:01:58.486519 systemd-logind[1132]: New session 2 of user core. Dec 13 04:01:58.487255 systemd[1]: Started session-2.scope. Dec 13 04:01:59.117001 sshd[1225]: pam_unix(sshd:session): session closed for user core Dec 13 04:01:59.124890 systemd[1]: Started sshd@2-172.24.4.88:22-172.24.4.1:44834.service. Dec 13 04:01:59.128758 systemd[1]: sshd@1-172.24.4.88:22-172.24.4.1:44826.service: Deactivated successfully. Dec 13 04:01:59.130531 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 04:01:59.133403 systemd-logind[1132]: Session 2 logged out. Waiting for processes to exit. Dec 13 04:01:59.135709 systemd-logind[1132]: Removed session 2. Dec 13 04:02:00.283683 sshd[1230]: Accepted publickey for core from 172.24.4.1 port 44834 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:00.288018 sshd[1230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:00.301182 systemd-logind[1132]: New session 3 of user core. Dec 13 04:02:00.301824 systemd[1]: Started session-3.scope. Dec 13 04:02:00.310586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 04:02:00.311009 systemd[1]: Stopped kubelet.service. Dec 13 04:02:00.311118 systemd[1]: kubelet.service: Consumed 2.252s CPU time. Dec 13 04:02:00.314054 systemd[1]: Starting kubelet.service... Dec 13 04:02:00.554970 systemd[1]: Started kubelet.service. 
Dec 13 04:02:00.888361 sshd[1230]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:00.898361 systemd[1]: Started sshd@3-172.24.4.88:22-172.24.4.1:44842.service. Dec 13 04:02:00.899456 systemd[1]: sshd@2-172.24.4.88:22-172.24.4.1:44834.service: Deactivated successfully. Dec 13 04:02:00.900845 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 04:02:00.905372 systemd-logind[1132]: Session 3 logged out. Waiting for processes to exit. Dec 13 04:02:00.908051 systemd-logind[1132]: Removed session 3. Dec 13 04:02:01.074025 kubelet[1238]: E1213 04:02:01.073935 1238 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:02:01.081210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:02:01.081375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:02:02.034840 sshd[1246]: Accepted publickey for core from 172.24.4.1 port 44842 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:02.036863 sshd[1246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:02.046305 systemd[1]: Started session-4.scope. Dec 13 04:02:02.046970 systemd-logind[1132]: New session 4 of user core. Dec 13 04:02:02.657480 sshd[1246]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:02.664304 systemd[1]: Started sshd@4-172.24.4.88:22-172.24.4.1:44854.service. Dec 13 04:02:02.672440 systemd[1]: sshd@3-172.24.4.88:22-172.24.4.1:44842.service: Deactivated successfully. Dec 13 04:02:02.674015 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 04:02:02.676467 systemd-logind[1132]: Session 4 logged out. Waiting for processes to exit. Dec 13 04:02:02.679435 systemd-logind[1132]: Removed session 4. Dec 13 04:02:03.884653 sshd[1252]: Accepted publickey for core from 172.24.4.1 port 44854 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:03.887591 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:03.897137 systemd[1]: Started session-5.scope. Dec 13 04:02:03.897824 systemd-logind[1132]: New session 5 of user core. Dec 13 04:02:04.381756 sudo[1256]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 04:02:04.382432 sudo[1256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 04:02:04.413376 systemd[1]: Starting coreos-metadata.service... Dec 13 04:02:11.256801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 04:02:11.257551 systemd[1]: Stopped kubelet.service. Dec 13 04:02:11.260533 systemd[1]: Starting kubelet.service... Dec 13 04:02:11.475583 coreos-metadata[1260]: Dec 13 04:02:11.475 WARN failed to locate config-drive, using the metadata service API instead Dec 13 04:02:11.565384 coreos-metadata[1260]: Dec 13 04:02:11.565 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 04:02:11.608468 systemd[1]: Started kubelet.service. 
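The two kubelet failures logged above (04:01:50 and 04:02:01) are the same condition repeated: the unit starts before /var/lib/kubelet/config.yaml has been provisioned, kubelet exits with status 1, and systemd schedules the next restart. A minimal pre-flight check in the same spirit, a sketch that assumes nothing beyond the path quoted in the error, could look like this:

    #!/usr/bin/env python3
    # Sketch: verify the config file named in the kubelet error above exists,
    # and fail with a similar message if it does not. Illustrative only; the
    # path comes straight from the log lines, the script is not kubelet code.
    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    def main() -> int:
        if not os.path.isfile(KUBELET_CONFIG):
            print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}: "
                  "no such file or directory", file=sys.stderr)
            return 1
        print(f"kubelet config present: {KUBELET_CONFIG}")
        return 0

    if __name__ == "__main__":
        sys.exit(main())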
Dec 13 04:02:11.757976 coreos-metadata[1260]: Dec 13 04:02:11.757 INFO Fetch successful Dec 13 04:02:11.757976 coreos-metadata[1260]: Dec 13 04:02:11.757 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 04:02:11.773446 coreos-metadata[1260]: Dec 13 04:02:11.773 INFO Fetch successful Dec 13 04:02:11.773446 coreos-metadata[1260]: Dec 13 04:02:11.773 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 04:02:11.788266 coreos-metadata[1260]: Dec 13 04:02:11.788 INFO Fetch successful Dec 13 04:02:11.788266 coreos-metadata[1260]: Dec 13 04:02:11.788 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 04:02:11.802945 coreos-metadata[1260]: Dec 13 04:02:11.802 INFO Fetch successful Dec 13 04:02:11.802945 coreos-metadata[1260]: Dec 13 04:02:11.802 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 04:02:11.818992 coreos-metadata[1260]: Dec 13 04:02:11.817 INFO Fetch successful Dec 13 04:02:11.832496 systemd[1]: Finished coreos-metadata.service. Dec 13 04:02:11.951438 kubelet[1271]: E1213 04:02:11.951400 1271 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:02:11.956371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:02:11.956485 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:02:13.606680 systemd[1]: Stopped kubelet.service. Dec 13 04:02:13.611449 systemd[1]: Starting kubelet.service... Dec 13 04:02:13.657932 systemd[1]: Reloading. Dec 13 04:02:13.777949 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T04:02:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:02:13.778297 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T04:02:13Z" level=info msg="torcx already run" Dec 13 04:02:14.027087 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:02:14.027105 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:02:14.052227 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:02:14.160872 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 04:02:14.161218 systemd[1]: Stopped kubelet.service. Dec 13 04:02:14.163321 systemd[1]: Starting kubelet.service... Dec 13 04:02:14.284787 systemd[1]: Started kubelet.service. Dec 13 04:02:14.338368 kubelet[1388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
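coreos-metadata falls back from the config drive to the EC2-compatible metadata API at 169.254.169.254, which is what the WARN / Fetching / Fetch successful sequence above records. A rough equivalent of those requests, a sketch limited to the endpoints that actually appear in the log, might be:

    #!/usr/bin/env python3
    # Sketch: fetch the same EC2-compatible metadata endpoints that
    # coreos-metadata logs above. Endpoint paths are taken from the log;
    # this is illustrative, not the coreos-metadata implementation.
    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data"
    PATHS = [
        "hostname",
        "instance-id",
        "instance-type",
        "local-ipv4",
        "public-ipv4",
        "public-keys/0/openssh-key",
    ]

    for path in PATHS:
        with urllib.request.urlopen(f"{BASE}/{path}", timeout=5) as resp:
            print(path, "->", resp.read().decode().strip())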
Dec 13 04:02:14.338689 kubelet[1388]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 04:02:14.338741 kubelet[1388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:02:14.338869 kubelet[1388]: I1213 04:02:14.338841 1388 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 04:02:14.797234 kubelet[1388]: I1213 04:02:14.797188 1388 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 04:02:14.797357 kubelet[1388]: I1213 04:02:14.797243 1388 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 04:02:14.797698 kubelet[1388]: I1213 04:02:14.797669 1388 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 04:02:14.943032 kubelet[1388]: I1213 04:02:14.942021 1388 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 04:02:14.993980 kubelet[1388]: I1213 04:02:14.993859 1388 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 04:02:14.997981 kubelet[1388]: I1213 04:02:14.997925 1388 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 04:02:14.998737 kubelet[1388]: I1213 04:02:14.998678 1388 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 04:02:14.998972 kubelet[1388]: I1213 04:02:14.998746 1388 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 04:02:14.998972 kubelet[1388]: I1213 04:02:14.998773 1388 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 04:02:14.998972 kubelet[1388]: I1213 04:02:14.998968 1388 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:02:14.999237 kubelet[1388]: I1213 04:02:14.999207 1388 kubelet.go:396] 
"Attempting to sync node with API server" Dec 13 04:02:14.999784 kubelet[1388]: I1213 04:02:14.999733 1388 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 04:02:14.999878 kubelet[1388]: I1213 04:02:14.999850 1388 kubelet.go:312] "Adding apiserver pod source" Dec 13 04:02:15.000154 kubelet[1388]: E1213 04:02:14.999974 1388 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:15.000154 kubelet[1388]: E1213 04:02:15.000047 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:15.000762 kubelet[1388]: I1213 04:02:15.000713 1388 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 04:02:15.005043 kubelet[1388]: I1213 04:02:15.004989 1388 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 04:02:15.013146 kubelet[1388]: I1213 04:02:15.013074 1388 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 04:02:15.013354 kubelet[1388]: W1213 04:02:15.013205 1388 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 04:02:15.015642 kubelet[1388]: I1213 04:02:15.015593 1388 server.go:1256] "Started kubelet" Dec 13 04:02:15.018404 kubelet[1388]: W1213 04:02:15.018364 1388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.24.4.88" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 04:02:15.018662 kubelet[1388]: E1213 04:02:15.018636 1388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.88" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 04:02:15.019154 kubelet[1388]: W1213 04:02:15.019124 1388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 04:02:15.019377 kubelet[1388]: E1213 04:02:15.019353 1388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 04:02:15.019619 kubelet[1388]: I1213 04:02:15.019590 1388 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 04:02:15.026469 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 04:02:15.026620 kubelet[1388]: I1213 04:02:15.023856 1388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 04:02:15.026930 kubelet[1388]: I1213 04:02:15.026897 1388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 04:02:15.027622 kubelet[1388]: I1213 04:02:15.027561 1388 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 04:02:15.039216 kubelet[1388]: I1213 04:02:15.038661 1388 server.go:461] "Adding debug handlers to kubelet server" Dec 13 04:02:15.039648 kubelet[1388]: I1213 04:02:15.039609 1388 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 04:02:15.040508 kubelet[1388]: I1213 04:02:15.040439 1388 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 04:02:15.040670 kubelet[1388]: I1213 04:02:15.040616 1388 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 04:02:15.043729 kubelet[1388]: I1213 04:02:15.043643 1388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 04:02:15.052359 kubelet[1388]: E1213 04:02:15.052170 1388 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 04:02:15.056444 kubelet[1388]: I1213 04:02:15.056389 1388 factory.go:221] Registration of the containerd container factory successfully Dec 13 04:02:15.056444 kubelet[1388]: I1213 04:02:15.056433 1388 factory.go:221] Registration of the systemd container factory successfully Dec 13 04:02:15.080267 kubelet[1388]: E1213 04:02:15.080218 1388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.88\" not found" node="172.24.4.88" Dec 13 04:02:15.082228 kubelet[1388]: I1213 04:02:15.082110 1388 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 04:02:15.082228 kubelet[1388]: I1213 04:02:15.082154 1388 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 04:02:15.082228 kubelet[1388]: I1213 04:02:15.082173 1388 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:02:15.097601 kubelet[1388]: I1213 04:02:15.097573 1388 policy_none.go:49] "None policy: Start" Dec 13 04:02:15.099489 kubelet[1388]: I1213 04:02:15.099450 1388 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 04:02:15.099958 kubelet[1388]: I1213 04:02:15.099940 1388 state_mem.go:35] "Initializing new in-memory state store" Dec 13 04:02:15.116360 systemd[1]: Created slice kubepods.slice. Dec 13 04:02:15.128752 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 04:02:15.134496 systemd[1]: Created slice kubepods-besteffort.slice. 
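The container_manager_linux nodeConfig dump a few entries earlier carries the hard-eviction thresholds this kubelet will enforce (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%). The sketch below only restates those logged values and illustrates how such a threshold would be compared against capacity; the helper and the sample numbers are assumptions for illustration, not kubelet code:

    #!/usr/bin/env python3
    # Sketch: hard-eviction thresholds copied from the nodeConfig dump above,
    # plus an illustrative check of a percentage threshold against capacity.
    HARD_EVICTION = {
        "memory.available": "100Mi",
        "nodefs.available": "10%",
        "nodefs.inodesFree": "5%",
        "imagefs.available": "15%",
    }

    def breaches(available: int, capacity: int, threshold: str) -> bool:
        """True if 'available' has fallen below the threshold ('%' or 'Mi')."""
        if threshold.endswith("%"):
            return available < capacity * float(threshold[:-1]) / 100.0
        if threshold.endswith("Mi"):
            return available < int(threshold[:-2]) * 1024 * 1024
        raise ValueError(f"unsupported threshold format: {threshold}")

    # Hypothetical example: 1 GiB free on a 20 GiB nodefs vs. the 10% rule.
    print(breaches(1 << 30, 20 << 30, HARD_EVICTION["nodefs.available"]))  # True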
Dec 13 04:02:15.139971 kubelet[1388]: I1213 04:02:15.139925 1388 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 04:02:15.140201 kubelet[1388]: I1213 04:02:15.140177 1388 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 04:02:15.143668 kubelet[1388]: I1213 04:02:15.143636 1388 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.88" Dec 13 04:02:15.144975 kubelet[1388]: E1213 04:02:15.144939 1388 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.88\" not found" Dec 13 04:02:15.153206 kubelet[1388]: I1213 04:02:15.153155 1388 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.88" Dec 13 04:02:15.181423 kubelet[1388]: E1213 04:02:15.181340 1388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.88\" not found" Dec 13 04:02:15.241383 kubelet[1388]: I1213 04:02:15.241307 1388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 04:02:15.242616 kubelet[1388]: I1213 04:02:15.242587 1388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 04:02:15.242890 kubelet[1388]: I1213 04:02:15.242864 1388 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 04:02:15.243306 kubelet[1388]: I1213 04:02:15.243280 1388 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 04:02:15.243974 kubelet[1388]: E1213 04:02:15.243940 1388 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 04:02:15.281877 kubelet[1388]: E1213 04:02:15.281822 1388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.88\" not found" Dec 13 04:02:15.384228 kubelet[1388]: E1213 04:02:15.382918 1388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.88\" not found" Dec 13 04:02:15.484712 kubelet[1388]: E1213 04:02:15.484670 1388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.88\" not found" Dec 13 04:02:15.585831 kubelet[1388]: E1213 04:02:15.585745 1388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.88\" not found" Dec 13 04:02:15.687115 kubelet[1388]: E1213 04:02:15.686881 1388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.88\" not found" Dec 13 04:02:15.788324 kubelet[1388]: E1213 04:02:15.788275 1388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.88\" not found" Dec 13 04:02:15.835256 sudo[1256]: pam_unix(sudo:session): session closed for user root Dec 13 04:02:15.883750 kubelet[1388]: I1213 04:02:15.883702 1388 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 04:02:15.884359 kubelet[1388]: W1213 04:02:15.884321 1388 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 04:02:15.884600 kubelet[1388]: W1213 04:02:15.884336 1388 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected 
watch close - watch lasted less than a second and no items received Dec 13 04:02:15.884991 kubelet[1388]: W1213 04:02:15.884402 1388 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 04:02:15.890537 kubelet[1388]: I1213 04:02:15.890467 1388 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 04:02:15.891343 env[1143]: time="2024-12-13T04:02:15.891208796Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 04:02:15.892410 kubelet[1388]: I1213 04:02:15.892156 1388 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 04:02:16.001034 kubelet[1388]: E1213 04:02:16.000804 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:16.001034 kubelet[1388]: I1213 04:02:16.000944 1388 apiserver.go:52] "Watching apiserver" Dec 13 04:02:16.024544 kubelet[1388]: I1213 04:02:16.024487 1388 topology_manager.go:215] "Topology Admit Handler" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" podNamespace="kube-system" podName="cilium-fz5mc" Dec 13 04:02:16.025165 kubelet[1388]: I1213 04:02:16.025125 1388 topology_manager.go:215] "Topology Admit Handler" podUID="dca32bdc-c33a-4fc9-95d4-298d41758294" podNamespace="kube-system" podName="kube-proxy-qnq8v" Dec 13 04:02:16.043116 systemd[1]: Created slice kubepods-burstable-poda2ef9317_52d1_4510_9d86_533056dd345c.slice. Dec 13 04:02:16.044695 kubelet[1388]: I1213 04:02:16.044589 1388 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 04:02:16.044966 kubelet[1388]: I1213 04:02:16.044862 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2ef9317-52d1-4510-9d86-533056dd345c-clustermesh-secrets\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045171 kubelet[1388]: I1213 04:02:16.044965 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dca32bdc-c33a-4fc9-95d4-298d41758294-kube-proxy\") pod \"kube-proxy-qnq8v\" (UID: \"dca32bdc-c33a-4fc9-95d4-298d41758294\") " pod="kube-system/kube-proxy-qnq8v" Dec 13 04:02:16.045171 kubelet[1388]: I1213 04:02:16.045050 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-xtables-lock\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045412 kubelet[1388]: I1213 04:02:16.045197 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-config-path\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045412 kubelet[1388]: I1213 04:02:16.045293 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-net\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045412 kubelet[1388]: I1213 04:02:16.045374 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-run\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045754 kubelet[1388]: I1213 04:02:16.045452 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-bpf-maps\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045754 kubelet[1388]: I1213 04:02:16.045547 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-hostproc\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045754 kubelet[1388]: I1213 04:02:16.045644 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-cgroup\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.045754 kubelet[1388]: I1213 04:02:16.045724 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cni-path\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.046621 kubelet[1388]: I1213 04:02:16.045803 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-kernel\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.046621 kubelet[1388]: I1213 04:02:16.045904 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcds\" (UniqueName: \"kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-kube-api-access-8fcds\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.046621 kubelet[1388]: I1213 04:02:16.046003 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4zr\" (UniqueName: \"kubernetes.io/projected/dca32bdc-c33a-4fc9-95d4-298d41758294-kube-api-access-ft4zr\") pod \"kube-proxy-qnq8v\" (UID: \"dca32bdc-c33a-4fc9-95d4-298d41758294\") " pod="kube-system/kube-proxy-qnq8v" Dec 13 04:02:16.046621 kubelet[1388]: I1213 04:02:16.046138 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-hubble-tls\") pod \"cilium-fz5mc\" (UID: 
\"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.046621 kubelet[1388]: I1213 04:02:16.046225 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-etc-cni-netd\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.047238 kubelet[1388]: I1213 04:02:16.046326 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-lib-modules\") pod \"cilium-fz5mc\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " pod="kube-system/cilium-fz5mc" Dec 13 04:02:16.047745 kubelet[1388]: I1213 04:02:16.047713 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dca32bdc-c33a-4fc9-95d4-298d41758294-xtables-lock\") pod \"kube-proxy-qnq8v\" (UID: \"dca32bdc-c33a-4fc9-95d4-298d41758294\") " pod="kube-system/kube-proxy-qnq8v" Dec 13 04:02:16.048018 kubelet[1388]: I1213 04:02:16.047990 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dca32bdc-c33a-4fc9-95d4-298d41758294-lib-modules\") pod \"kube-proxy-qnq8v\" (UID: \"dca32bdc-c33a-4fc9-95d4-298d41758294\") " pod="kube-system/kube-proxy-qnq8v" Dec 13 04:02:16.065416 systemd[1]: Created slice kubepods-besteffort-poddca32bdc_c33a_4fc9_95d4_298d41758294.slice. Dec 13 04:02:16.113801 sshd[1252]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:16.120473 systemd[1]: sshd@4-172.24.4.88:22-172.24.4.1:44854.service: Deactivated successfully. Dec 13 04:02:16.122627 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 04:02:16.123097 systemd[1]: session-5.scope: Consumed 1.072s CPU time. Dec 13 04:02:16.124906 systemd-logind[1132]: Session 5 logged out. Waiting for processes to exit. Dec 13 04:02:16.128341 systemd-logind[1132]: Removed session 5. Dec 13 04:02:16.360964 env[1143]: time="2024-12-13T04:02:16.360792057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fz5mc,Uid:a2ef9317-52d1-4510-9d86-533056dd345c,Namespace:kube-system,Attempt:0,}" Dec 13 04:02:16.380037 env[1143]: time="2024-12-13T04:02:16.379480607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnq8v,Uid:dca32bdc-c33a-4fc9-95d4-298d41758294,Namespace:kube-system,Attempt:0,}" Dec 13 04:02:17.001462 kubelet[1388]: E1213 04:02:17.001355 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:17.205237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831251303.mount: Deactivated successfully. 
Dec 13 04:02:17.222545 env[1143]: time="2024-12-13T04:02:17.222486761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.225039 env[1143]: time="2024-12-13T04:02:17.224998727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.229264 env[1143]: time="2024-12-13T04:02:17.229175944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.232784 env[1143]: time="2024-12-13T04:02:17.232742988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.239255 env[1143]: time="2024-12-13T04:02:17.239208797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.242855 env[1143]: time="2024-12-13T04:02:17.242773495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.249411 env[1143]: time="2024-12-13T04:02:17.249342088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.258787 env[1143]: time="2024-12-13T04:02:17.258685636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:17.296012 env[1143]: time="2024-12-13T04:02:17.295886174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:02:17.296456 env[1143]: time="2024-12-13T04:02:17.296361204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:02:17.296456 env[1143]: time="2024-12-13T04:02:17.296415942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:02:17.296995 env[1143]: time="2024-12-13T04:02:17.296890873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e301d3077bac7f32ff3e3d5088ad0a9d4b78c8ef1478cfe4a62009c5fe7b8f9 pid=1442 runtime=io.containerd.runc.v2 Dec 13 04:02:17.317047 env[1143]: time="2024-12-13T04:02:17.316712780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:02:17.317047 env[1143]: time="2024-12-13T04:02:17.316799317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:02:17.317047 env[1143]: time="2024-12-13T04:02:17.316838502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:02:17.317497 env[1143]: time="2024-12-13T04:02:17.317447192Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45 pid=1458 runtime=io.containerd.runc.v2 Dec 13 04:02:17.325634 systemd[1]: Started cri-containerd-5e301d3077bac7f32ff3e3d5088ad0a9d4b78c8ef1478cfe4a62009c5fe7b8f9.scope. Dec 13 04:02:17.336164 systemd[1]: Started cri-containerd-d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45.scope. Dec 13 04:02:17.390783 env[1143]: time="2024-12-13T04:02:17.390688603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnq8v,Uid:dca32bdc-c33a-4fc9-95d4-298d41758294,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e301d3077bac7f32ff3e3d5088ad0a9d4b78c8ef1478cfe4a62009c5fe7b8f9\"" Dec 13 04:02:17.395483 env[1143]: time="2024-12-13T04:02:17.395425573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fz5mc,Uid:a2ef9317-52d1-4510-9d86-533056dd345c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\"" Dec 13 04:02:17.395698 env[1143]: time="2024-12-13T04:02:17.395669182Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 04:02:18.002136 kubelet[1388]: E1213 04:02:18.001980 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:19.002588 kubelet[1388]: E1213 04:02:19.002497 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:19.050892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562949754.mount: Deactivated successfully. 
Dec 13 04:02:19.904712 env[1143]: time="2024-12-13T04:02:19.904635437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:19.917759 env[1143]: time="2024-12-13T04:02:19.917703124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:19.921431 env[1143]: time="2024-12-13T04:02:19.921368976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:19.923630 env[1143]: time="2024-12-13T04:02:19.923566867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:19.924366 env[1143]: time="2024-12-13T04:02:19.924337593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 04:02:19.926366 env[1143]: time="2024-12-13T04:02:19.926343394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 04:02:19.929277 env[1143]: time="2024-12-13T04:02:19.929248743Z" level=info msg="CreateContainer within sandbox \"5e301d3077bac7f32ff3e3d5088ad0a9d4b78c8ef1478cfe4a62009c5fe7b8f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 04:02:19.948824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547481598.mount: Deactivated successfully. Dec 13 04:02:19.954122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982470109.mount: Deactivated successfully. Dec 13 04:02:19.986208 env[1143]: time="2024-12-13T04:02:19.986172708Z" level=info msg="CreateContainer within sandbox \"5e301d3077bac7f32ff3e3d5088ad0a9d4b78c8ef1478cfe4a62009c5fe7b8f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dbb33a97a024fd84756d176ed1a69a8226c798f51fa3d7d954a903f8a118ac67\"" Dec 13 04:02:19.987090 env[1143]: time="2024-12-13T04:02:19.987044592Z" level=info msg="StartContainer for \"dbb33a97a024fd84756d176ed1a69a8226c798f51fa3d7d954a903f8a118ac67\"" Dec 13 04:02:20.002883 kubelet[1388]: E1213 04:02:20.002819 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:20.015136 systemd[1]: Started cri-containerd-dbb33a97a024fd84756d176ed1a69a8226c798f51fa3d7d954a903f8a118ac67.scope. 
Dec 13 04:02:20.075234 env[1143]: time="2024-12-13T04:02:20.075118307Z" level=info msg="StartContainer for \"dbb33a97a024fd84756d176ed1a69a8226c798f51fa3d7d954a903f8a118ac67\" returns successfully" Dec 13 04:02:20.293021 kubelet[1388]: I1213 04:02:20.292940 1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qnq8v" podStartSLOduration=2.762301667 podStartE2EDuration="5.292845982s" podCreationTimestamp="2024-12-13 04:02:15 +0000 UTC" firstStartedPulling="2024-12-13 04:02:17.394398296 +0000 UTC m=+3.104068254" lastFinishedPulling="2024-12-13 04:02:19.924942561 +0000 UTC m=+5.634612569" observedRunningTime="2024-12-13 04:02:20.292721774 +0000 UTC m=+6.002391783" watchObservedRunningTime="2024-12-13 04:02:20.292845982 +0000 UTC m=+6.002516041" Dec 13 04:02:21.003312 kubelet[1388]: E1213 04:02:21.003163 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:22.003751 kubelet[1388]: E1213 04:02:22.003682 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:23.003985 kubelet[1388]: E1213 04:02:23.003908 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:24.004860 kubelet[1388]: E1213 04:02:24.004791 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:25.005609 kubelet[1388]: E1213 04:02:25.005552 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:26.006785 kubelet[1388]: E1213 04:02:26.006649 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:27.007517 kubelet[1388]: E1213 04:02:27.007359 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:27.674538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754146294.mount: Deactivated successfully. Dec 13 04:02:28.008577 kubelet[1388]: E1213 04:02:28.008538 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:29.009785 kubelet[1388]: E1213 04:02:29.009721 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:30.010622 kubelet[1388]: E1213 04:02:30.010531 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:31.011440 kubelet[1388]: E1213 04:02:31.011339 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:31.824491 update_engine[1134]: I1213 04:02:31.824427 1134 update_attempter.cc:509] Updating boot flags... 
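The pod_startup_latency_tracker entry above splits the kube-proxy-qnq8v start into an SLO portion (podStartSLOduration=2.762301667) and an end-to-end portion (5.292845982 s); the roughly 2.53 s gap between them is the image pull window bounded by firstStartedPulling and lastFinishedPulling. A small sketch that reproduces the arithmetic from the timestamps quoted in the entry (the layout string is Go's reference time; the monotonic "m=+" suffixes are dropped):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry for
	// kube-proxy-qnq8v (monotonic "m=+..." suffixes dropped).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, err := time.Parse(layout, "2024-12-13 04:02:17.394398296 +0000 UTC")
	if err != nil {
		panic(err)
	}
	last, err := time.Parse(layout, "2024-12-13 04:02:19.924942561 +0000 UTC")
	if err != nil {
		panic(err)
	}

	pull := last.Sub(first)
	fmt.Println("image pull window:", pull) // ~2.530544265s

	// The tracker reports essentially the same gap as E2E minus SLO duration.
	fmt.Println("podStartE2EDuration - podStartSLOduration:",
		5.292845982-2.762301667, "s")
}
```

The wall-clock subtraction and the E2E-minus-SLO figure agree to within a few tens of nanoseconds; the small residue comes from the monotonic clock readings the tracker uses.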
Dec 13 04:02:32.012326 kubelet[1388]: E1213 04:02:32.012275 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:33.013566 kubelet[1388]: E1213 04:02:33.013423 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:33.190472 env[1143]: time="2024-12-13T04:02:33.190269482Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:33.197521 env[1143]: time="2024-12-13T04:02:33.197420342Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:33.202952 env[1143]: time="2024-12-13T04:02:33.202866928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:33.207148 env[1143]: time="2024-12-13T04:02:33.205472951Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 04:02:33.211771 env[1143]: time="2024-12-13T04:02:33.211704694Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:02:33.237540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118852684.mount: Deactivated successfully. Dec 13 04:02:33.261978 env[1143]: time="2024-12-13T04:02:33.261898731Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\"" Dec 13 04:02:33.265482 env[1143]: time="2024-12-13T04:02:33.263800617Z" level=info msg="StartContainer for \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\"" Dec 13 04:02:33.306746 systemd[1]: Started cri-containerd-fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786.scope. Dec 13 04:02:33.308234 systemd[1]: run-containerd-runc-k8s.io-fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786-runc.HQpkFR.mount: Deactivated successfully. Dec 13 04:02:33.351763 env[1143]: time="2024-12-13T04:02:33.351710445Z" level=info msg="StartContainer for \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\" returns successfully" Dec 13 04:02:33.354462 systemd[1]: cri-containerd-fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786.scope: Deactivated successfully. Dec 13 04:02:34.015497 kubelet[1388]: E1213 04:02:34.015382 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:34.230743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786-rootfs.mount: Deactivated successfully. 
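The transient mount units named in these entries, for example var-lib-containerd-tmpmounts-containerd\x2dmount1118852684.mount and the run-containerd-io.containerd.runtime.v2.task-k8s.io-...-rootfs.mount units, are systemd's escaped form of the mount point path: the leading "/" is dropped, a literal "-" inside a path component is hex-escaped as "\x2d", and the remaining "/" separators become "-". A simplified sketch of that escaping, covering only the characters that occur in these paths (the real systemd-escape handles additional unsafe characters):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeMountUnit is a simplified take on `systemd-escape --path --suffix=mount`:
// trim the leading "/", escape "-" as "\x2d", then turn the remaining "/" into "-".
// Real systemd also hex-escapes other unsafe characters; this sketch ignores that.
func escapeMountUnit(path string) string {
	p := strings.TrimPrefix(path, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	p = strings.ReplaceAll(p, "/", "-")
	return p + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit("/var/lib/containerd/tmpmounts/containerd-mount1118852684"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount1118852684.mount
}
```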
Dec 13 04:02:34.464268 env[1143]: time="2024-12-13T04:02:34.463571596Z" level=info msg="shim disconnected" id=fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786 Dec 13 04:02:34.465152 env[1143]: time="2024-12-13T04:02:34.465097969Z" level=warning msg="cleaning up after shim disconnected" id=fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786 namespace=k8s.io Dec 13 04:02:34.465327 env[1143]: time="2024-12-13T04:02:34.465291713Z" level=info msg="cleaning up dead shim" Dec 13 04:02:34.485762 env[1143]: time="2024-12-13T04:02:34.485654724Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:02:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1744 runtime=io.containerd.runc.v2\n" Dec 13 04:02:35.000022 kubelet[1388]: E1213 04:02:34.999954 1388 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:35.015752 kubelet[1388]: E1213 04:02:35.015684 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:35.375942 env[1143]: time="2024-12-13T04:02:35.375795523Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:02:35.738616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452316750.mount: Deactivated successfully. Dec 13 04:02:35.959506 env[1143]: time="2024-12-13T04:02:35.959366248Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\"" Dec 13 04:02:35.960458 env[1143]: time="2024-12-13T04:02:35.960394102Z" level=info msg="StartContainer for \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\"" Dec 13 04:02:36.015821 systemd[1]: Started cri-containerd-ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27.scope. Dec 13 04:02:36.017567 kubelet[1388]: E1213 04:02:36.016571 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:36.068986 env[1143]: time="2024-12-13T04:02:36.068937356Z" level=info msg="StartContainer for \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\" returns successfully" Dec 13 04:02:36.075206 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:02:36.075520 systemd[1]: Stopped systemd-sysctl.service. Dec 13 04:02:36.075751 systemd[1]: Stopping systemd-sysctl.service... Dec 13 04:02:36.082233 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:02:36.082613 systemd[1]: cri-containerd-ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27.scope: Deactivated successfully. Dec 13 04:02:36.090280 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 04:02:36.112318 env[1143]: time="2024-12-13T04:02:36.112259192Z" level=info msg="shim disconnected" id=ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27 Dec 13 04:02:36.112318 env[1143]: time="2024-12-13T04:02:36.112304134Z" level=warning msg="cleaning up after shim disconnected" id=ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27 namespace=k8s.io Dec 13 04:02:36.112318 env[1143]: time="2024-12-13T04:02:36.112314746Z" level=info msg="cleaning up dead shim" Dec 13 04:02:36.119549 env[1143]: time="2024-12-13T04:02:36.119510917Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:02:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1804 runtime=io.containerd.runc.v2\n" Dec 13 04:02:36.383537 env[1143]: time="2024-12-13T04:02:36.381800684Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:02:36.417180 env[1143]: time="2024-12-13T04:02:36.417044115Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\"" Dec 13 04:02:36.418744 env[1143]: time="2024-12-13T04:02:36.418694425Z" level=info msg="StartContainer for \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\"" Dec 13 04:02:36.457354 systemd[1]: Started cri-containerd-cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36.scope. Dec 13 04:02:36.518935 systemd[1]: cri-containerd-cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36.scope: Deactivated successfully. Dec 13 04:02:36.523342 env[1143]: time="2024-12-13T04:02:36.523282839Z" level=info msg="StartContainer for \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\" returns successfully" Dec 13 04:02:36.561696 env[1143]: time="2024-12-13T04:02:36.561642810Z" level=info msg="shim disconnected" id=cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36 Dec 13 04:02:36.561696 env[1143]: time="2024-12-13T04:02:36.561692263Z" level=warning msg="cleaning up after shim disconnected" id=cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36 namespace=k8s.io Dec 13 04:02:36.561909 env[1143]: time="2024-12-13T04:02:36.561702895Z" level=info msg="cleaning up dead shim" Dec 13 04:02:36.568575 env[1143]: time="2024-12-13T04:02:36.568535584Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:02:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1862 runtime=io.containerd.runc.v2\n" Dec 13 04:02:36.731796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27-rootfs.mount: Deactivated successfully. Dec 13 04:02:37.016978 kubelet[1388]: E1213 04:02:37.016880 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:37.388433 env[1143]: time="2024-12-13T04:02:37.388164018Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:02:37.416983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289326905.mount: Deactivated successfully. 
Dec 13 04:02:37.438008 env[1143]: time="2024-12-13T04:02:37.437926932Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\"" Dec 13 04:02:37.439359 env[1143]: time="2024-12-13T04:02:37.439205684Z" level=info msg="StartContainer for \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\"" Dec 13 04:02:37.487026 systemd[1]: Started cri-containerd-1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275.scope. Dec 13 04:02:37.533513 systemd[1]: cri-containerd-1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275.scope: Deactivated successfully. Dec 13 04:02:37.536216 env[1143]: time="2024-12-13T04:02:37.535881396Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2ef9317_52d1_4510_9d86_533056dd345c.slice/cri-containerd-1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275.scope/memory.events\": no such file or directory" Dec 13 04:02:37.542500 env[1143]: time="2024-12-13T04:02:37.542294424Z" level=info msg="StartContainer for \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\" returns successfully" Dec 13 04:02:37.574220 env[1143]: time="2024-12-13T04:02:37.574154196Z" level=info msg="shim disconnected" id=1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275 Dec 13 04:02:37.574220 env[1143]: time="2024-12-13T04:02:37.574209319Z" level=warning msg="cleaning up after shim disconnected" id=1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275 namespace=k8s.io Dec 13 04:02:37.574220 env[1143]: time="2024-12-13T04:02:37.574222055Z" level=info msg="cleaning up dead shim" Dec 13 04:02:37.581126 env[1143]: time="2024-12-13T04:02:37.581089017Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:02:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1920 runtime=io.containerd.runc.v2\n" Dec 13 04:02:37.732306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275-rootfs.mount: Deactivated successfully. Dec 13 04:02:38.018114 kubelet[1388]: E1213 04:02:38.017995 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:38.397331 env[1143]: time="2024-12-13T04:02:38.396982672Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:02:38.451666 env[1143]: time="2024-12-13T04:02:38.451537483Z" level=info msg="CreateContainer within sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\"" Dec 13 04:02:38.452630 env[1143]: time="2024-12-13T04:02:38.452494044Z" level=info msg="StartContainer for \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\"" Dec 13 04:02:38.504804 systemd[1]: Started cri-containerd-71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf.scope. 
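The warning about *cgroupsv2.Manager.EventChan in the clean-cilium-state entries above appears to be a race between containerd's cgroup v2 event watcher and a very short-lived init container: by the time the watcher tries to add an inotify watch on the scope's memory.events file, the scope has already been torn down, so the call fails with "no such file or directory". A minimal sketch of that failure mode using the inotify syscalls Go exposes on Linux; the path below mirrors the shape of the one in the warning but is hypothetical:

```go
//go:build linux

package main

import (
	"fmt"
	"syscall"
)

// watchMemoryEvents tries to add an inotify watch on a cgroup's
// memory.events file, the same operation the warning in the log
// reports as failing once the scope has already been removed.
func watchMemoryEvents(path string) error {
	fd, err := syscall.InotifyInit1(0)
	if err != nil {
		return err
	}
	defer syscall.Close(fd)

	if _, err := syscall.InotifyAddWatch(fd, path, syscall.IN_MODIFY); err != nil {
		// For an exited scope the cgroup directory is gone, so this
		// returns ENOENT ("no such file or directory"), as in the log.
		return fmt.Errorf("failed to add inotify watch for %q: %w", path, err)
	}
	return nil
}

func main() {
	// Hypothetical path in the same shape as the one from the warning.
	err := watchMemoryEvents("/sys/fs/cgroup/kubepods.slice/example.scope/memory.events")
	fmt.Println(err)
}
```

The container itself still starts and exits successfully, as the "returns successfully" line right after the warning shows; the watcher just has nothing left to observe.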
Dec 13 04:02:38.549197 env[1143]: time="2024-12-13T04:02:38.549159799Z" level=info msg="StartContainer for \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\" returns successfully" Dec 13 04:02:38.706309 kubelet[1388]: I1213 04:02:38.705110 1388 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 04:02:38.730702 systemd[1]: run-containerd-runc-k8s.io-71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf-runc.nVRoh9.mount: Deactivated successfully. Dec 13 04:02:39.019264 kubelet[1388]: E1213 04:02:39.019182 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:39.024210 kernel: Initializing XFRM netlink socket Dec 13 04:02:40.020628 kubelet[1388]: E1213 04:02:40.020499 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:40.793733 systemd-networkd[974]: cilium_host: Link UP Dec 13 04:02:40.797756 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 04:02:40.794272 systemd-networkd[974]: cilium_net: Link UP Dec 13 04:02:40.800797 systemd-networkd[974]: cilium_net: Gained carrier Dec 13 04:02:40.801194 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 04:02:40.804278 systemd-networkd[974]: cilium_host: Gained carrier Dec 13 04:02:41.021208 kubelet[1388]: E1213 04:02:41.021013 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:41.033089 systemd-networkd[974]: cilium_vxlan: Link UP Dec 13 04:02:41.033348 systemd-networkd[974]: cilium_vxlan: Gained carrier Dec 13 04:02:41.337256 systemd-networkd[974]: cilium_host: Gained IPv6LL Dec 13 04:02:41.352488 kernel: NET: Registered PF_ALG protocol family Dec 13 04:02:41.401488 systemd-networkd[974]: cilium_net: Gained IPv6LL Dec 13 04:02:42.022876 kubelet[1388]: E1213 04:02:42.022799 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:42.174712 kubelet[1388]: I1213 04:02:42.174679 1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fz5mc" podStartSLOduration=11.364317037 podStartE2EDuration="27.17462937s" podCreationTimestamp="2024-12-13 04:02:15 +0000 UTC" firstStartedPulling="2024-12-13 04:02:17.397378027 +0000 UTC m=+3.107047995" lastFinishedPulling="2024-12-13 04:02:33.20769032 +0000 UTC m=+18.917360328" observedRunningTime="2024-12-13 04:02:39.443556873 +0000 UTC m=+25.153226911" watchObservedRunningTime="2024-12-13 04:02:42.17462937 +0000 UTC m=+27.884299328" Dec 13 04:02:42.175207 kubelet[1388]: I1213 04:02:42.175189 1388 topology_manager.go:215] "Topology Admit Handler" podUID="0fc463fa-23bf-4643-a2de-03bff2581ff1" podNamespace="default" podName="nginx-deployment-6d5f899847-64wpn" Dec 13 04:02:42.182013 systemd[1]: Created slice kubepods-besteffort-pod0fc463fa_23bf_4643_a2de_03bff2581ff1.slice. 
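The slice created for the nginx pod, kubepods-besteffort-pod0fc463fa_23bf_4643_a2de_03bff2581ff1.slice, shows how the systemd cgroup driver derives unit names in this log: the pod's QoS class selects the parent slice and the pod UID is embedded with its dashes turned into underscores. A small sketch of that mapping, simplified to the BestEffort-style names seen here:

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName shows how the systemd slice names in the log are
// derived: the pod UID has its dashes replaced with underscores and is
// embedded under the QoS-class slice (cgroup driver naming, simplified).
func besteffortSliceName(podUID string) string {
	return fmt.Sprintf("kubepods-besteffort-pod%s.slice",
		strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(besteffortSliceName("0fc463fa-23bf-4643-a2de-03bff2581ff1"))
	// kubepods-besteffort-pod0fc463fa_23bf_4643_a2de_03bff2581ff1.slice
}
```

The cilium pod's scope earlier in the log follows the same pattern under kubepods-burstable.slice.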
Dec 13 04:02:42.294784 systemd-networkd[974]: lxc_health: Link UP Dec 13 04:02:42.312200 systemd-networkd[974]: lxc_health: Gained carrier Dec 13 04:02:42.313086 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 04:02:42.335550 kubelet[1388]: I1213 04:02:42.335499 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwlf\" (UniqueName: \"kubernetes.io/projected/0fc463fa-23bf-4643-a2de-03bff2581ff1-kube-api-access-jdwlf\") pod \"nginx-deployment-6d5f899847-64wpn\" (UID: \"0fc463fa-23bf-4643-a2de-03bff2581ff1\") " pod="default/nginx-deployment-6d5f899847-64wpn" Dec 13 04:02:42.436500 systemd-networkd[974]: cilium_vxlan: Gained IPv6LL Dec 13 04:02:42.486601 env[1143]: time="2024-12-13T04:02:42.486508711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-64wpn,Uid:0fc463fa-23bf-4643-a2de-03bff2581ff1,Namespace:default,Attempt:0,}" Dec 13 04:02:42.580441 systemd-networkd[974]: lxce159f03bef5a: Link UP Dec 13 04:02:42.588107 kernel: eth0: renamed from tmpc5c68 Dec 13 04:02:42.595194 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce159f03bef5a: link becomes ready Dec 13 04:02:42.592838 systemd-networkd[974]: lxce159f03bef5a: Gained carrier Dec 13 04:02:43.024794 kubelet[1388]: E1213 04:02:43.024749 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:43.385377 systemd-networkd[974]: lxc_health: Gained IPv6LL Dec 13 04:02:44.025951 kubelet[1388]: E1213 04:02:44.025908 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:44.153267 systemd-networkd[974]: lxce159f03bef5a: Gained IPv6LL Dec 13 04:02:45.027238 kubelet[1388]: E1213 04:02:45.027172 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:46.028539 kubelet[1388]: E1213 04:02:46.028406 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:47.004762 env[1143]: time="2024-12-13T04:02:47.004574892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:02:47.005714 env[1143]: time="2024-12-13T04:02:47.005629437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:02:47.005995 env[1143]: time="2024-12-13T04:02:47.005891410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:02:47.006850 env[1143]: time="2024-12-13T04:02:47.006747508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5c68870e22639a6087744f89b2e956484cd2362de914e4ba8c0f543b3701376 pid=2441 runtime=io.containerd.runc.v2 Dec 13 04:02:47.029193 kubelet[1388]: E1213 04:02:47.029133 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:47.043741 systemd[1]: run-containerd-runc-k8s.io-c5c68870e22639a6087744f89b2e956484cd2362de914e4ba8c0f543b3701376-runc.SHYSjJ.mount: Deactivated successfully. Dec 13 04:02:47.052724 systemd[1]: Started cri-containerd-c5c68870e22639a6087744f89b2e956484cd2362de914e4ba8c0f543b3701376.scope. 
Dec 13 04:02:47.095461 env[1143]: time="2024-12-13T04:02:47.095412774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-64wpn,Uid:0fc463fa-23bf-4643-a2de-03bff2581ff1,Namespace:default,Attempt:0,} returns sandbox id \"c5c68870e22639a6087744f89b2e956484cd2362de914e4ba8c0f543b3701376\"" Dec 13 04:02:47.097166 env[1143]: time="2024-12-13T04:02:47.097139480Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 04:02:48.029455 kubelet[1388]: E1213 04:02:48.029262 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:49.030274 kubelet[1388]: E1213 04:02:49.030235 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:50.030965 kubelet[1388]: E1213 04:02:50.030893 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:51.032041 kubelet[1388]: E1213 04:02:51.032001 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:52.032992 kubelet[1388]: E1213 04:02:52.032944 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:53.033987 kubelet[1388]: E1213 04:02:53.033898 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:53.246550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919877647.mount: Deactivated successfully. Dec 13 04:02:54.034213 kubelet[1388]: E1213 04:02:54.034096 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:55.001002 kubelet[1388]: E1213 04:02:55.000910 1388 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:55.035117 kubelet[1388]: E1213 04:02:55.035017 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:55.396376 env[1143]: time="2024-12-13T04:02:55.396256975Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:55.400359 env[1143]: time="2024-12-13T04:02:55.400283621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:55.404698 env[1143]: time="2024-12-13T04:02:55.404633393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:55.408844 env[1143]: time="2024-12-13T04:02:55.408776247Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:02:55.411146 env[1143]: time="2024-12-13T04:02:55.411040965Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 04:02:55.416540 env[1143]: 
time="2024-12-13T04:02:55.416458729Z" level=info msg="CreateContainer within sandbox \"c5c68870e22639a6087744f89b2e956484cd2362de914e4ba8c0f543b3701376\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 04:02:55.448035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677956252.mount: Deactivated successfully. Dec 13 04:02:55.462352 env[1143]: time="2024-12-13T04:02:55.462277606Z" level=info msg="CreateContainer within sandbox \"c5c68870e22639a6087744f89b2e956484cd2362de914e4ba8c0f543b3701376\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c56ed0016b111aa7428f533e38ac910abeb631ec54f4120677e4fe5dc86934c6\"" Dec 13 04:02:55.463855 env[1143]: time="2024-12-13T04:02:55.463804130Z" level=info msg="StartContainer for \"c56ed0016b111aa7428f533e38ac910abeb631ec54f4120677e4fe5dc86934c6\"" Dec 13 04:02:55.505847 systemd[1]: Started cri-containerd-c56ed0016b111aa7428f533e38ac910abeb631ec54f4120677e4fe5dc86934c6.scope. Dec 13 04:02:55.656179 env[1143]: time="2024-12-13T04:02:55.655382478Z" level=info msg="StartContainer for \"c56ed0016b111aa7428f533e38ac910abeb631ec54f4120677e4fe5dc86934c6\" returns successfully" Dec 13 04:02:56.035957 kubelet[1388]: E1213 04:02:56.035811 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:56.554177 kubelet[1388]: I1213 04:02:56.554024 1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-64wpn" podStartSLOduration=6.238900376 podStartE2EDuration="14.553910912s" podCreationTimestamp="2024-12-13 04:02:42 +0000 UTC" firstStartedPulling="2024-12-13 04:02:47.096507839 +0000 UTC m=+32.806177797" lastFinishedPulling="2024-12-13 04:02:55.411518324 +0000 UTC m=+41.121188333" observedRunningTime="2024-12-13 04:02:56.552859325 +0000 UTC m=+42.262529333" watchObservedRunningTime="2024-12-13 04:02:56.553910912 +0000 UTC m=+42.263580920" Dec 13 04:02:57.036990 kubelet[1388]: E1213 04:02:57.036916 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:58.037954 kubelet[1388]: E1213 04:02:58.037900 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:02:59.039100 kubelet[1388]: E1213 04:02:59.038969 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:00.039636 kubelet[1388]: E1213 04:03:00.039540 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:01.040679 kubelet[1388]: E1213 04:03:01.040581 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:02.041455 kubelet[1388]: E1213 04:03:02.041295 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:03.041937 kubelet[1388]: E1213 04:03:03.041869 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:04.042338 kubelet[1388]: E1213 04:03:04.042246 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:05.043124 kubelet[1388]: E1213 04:03:05.043081 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:06.044742 kubelet[1388]: E1213 04:03:06.044652 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:06.950864 kubelet[1388]: I1213 04:03:06.950740 1388 topology_manager.go:215] "Topology Admit Handler" podUID="fad0a524-46e8-4412-b79d-0a9da6efe7b7" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 04:03:06.965120 systemd[1]: Created slice kubepods-besteffort-podfad0a524_46e8_4412_b79d_0a9da6efe7b7.slice. Dec 13 04:03:07.045597 kubelet[1388]: E1213 04:03:07.045530 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:07.107253 kubelet[1388]: I1213 04:03:07.107156 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxt5m\" (UniqueName: \"kubernetes.io/projected/fad0a524-46e8-4412-b79d-0a9da6efe7b7-kube-api-access-qxt5m\") pod \"nfs-server-provisioner-0\" (UID: \"fad0a524-46e8-4412-b79d-0a9da6efe7b7\") " pod="default/nfs-server-provisioner-0" Dec 13 04:03:07.107630 kubelet[1388]: I1213 04:03:07.107597 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fad0a524-46e8-4412-b79d-0a9da6efe7b7-data\") pod \"nfs-server-provisioner-0\" (UID: \"fad0a524-46e8-4412-b79d-0a9da6efe7b7\") " pod="default/nfs-server-provisioner-0" Dec 13 04:03:07.273573 env[1143]: time="2024-12-13T04:03:07.272775552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fad0a524-46e8-4412-b79d-0a9da6efe7b7,Namespace:default,Attempt:0,}" Dec 13 04:03:07.968317 systemd-networkd[974]: lxcba0cac564a92: Link UP Dec 13 04:03:07.980108 kernel: eth0: renamed from tmpbe735 Dec 13 04:03:07.999283 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 04:03:07.999477 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcba0cac564a92: link becomes ready Dec 13 04:03:07.999684 systemd-networkd[974]: lxcba0cac564a92: Gained carrier Dec 13 04:03:08.047748 kubelet[1388]: E1213 04:03:08.047640 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:08.786442 env[1143]: time="2024-12-13T04:03:08.786279338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:08.787405 env[1143]: time="2024-12-13T04:03:08.786561728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:08.787405 env[1143]: time="2024-12-13T04:03:08.786660320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:08.788019 env[1143]: time="2024-12-13T04:03:08.787875693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be735726812c16adeed5a921be9a45e77ea1a3579ab90c725046b8a298148cef pid=2567 runtime=io.containerd.runc.v2 Dec 13 04:03:08.830596 systemd[1]: run-containerd-runc-k8s.io-be735726812c16adeed5a921be9a45e77ea1a3579ab90c725046b8a298148cef-runc.GIlQDJ.mount: Deactivated successfully. Dec 13 04:03:08.844388 systemd[1]: Started cri-containerd-be735726812c16adeed5a921be9a45e77ea1a3579ab90c725046b8a298148cef.scope. 
Dec 13 04:03:08.918268 env[1143]: time="2024-12-13T04:03:08.918047559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fad0a524-46e8-4412-b79d-0a9da6efe7b7,Namespace:default,Attempt:0,} returns sandbox id \"be735726812c16adeed5a921be9a45e77ea1a3579ab90c725046b8a298148cef\"" Dec 13 04:03:08.921755 env[1143]: time="2024-12-13T04:03:08.921711597Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 04:03:09.048976 kubelet[1388]: E1213 04:03:09.048835 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:09.881361 systemd-networkd[974]: lxcba0cac564a92: Gained IPv6LL Dec 13 04:03:10.049675 kubelet[1388]: E1213 04:03:10.049376 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:11.050609 kubelet[1388]: E1213 04:03:11.050554 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:12.051175 kubelet[1388]: E1213 04:03:12.051107 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:12.860237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860525479.mount: Deactivated successfully. Dec 13 04:03:13.052096 kubelet[1388]: E1213 04:03:13.051954 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:14.053275 kubelet[1388]: E1213 04:03:14.052953 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:15.000839 kubelet[1388]: E1213 04:03:15.000744 1388 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:15.053958 kubelet[1388]: E1213 04:03:15.053897 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:16.054749 kubelet[1388]: E1213 04:03:16.054633 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:17.055344 kubelet[1388]: E1213 04:03:17.055251 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:17.407599 env[1143]: time="2024-12-13T04:03:17.407388465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:17.413584 env[1143]: time="2024-12-13T04:03:17.413505967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:17.417647 env[1143]: time="2024-12-13T04:03:17.417575752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:17.420537 env[1143]: time="2024-12-13T04:03:17.420492613Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:17.421415 env[1143]: time="2024-12-13T04:03:17.421366276Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 04:03:17.427225 env[1143]: time="2024-12-13T04:03:17.427149661Z" level=info msg="CreateContainer within sandbox \"be735726812c16adeed5a921be9a45e77ea1a3579ab90c725046b8a298148cef\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 04:03:17.468551 env[1143]: time="2024-12-13T04:03:17.468475957Z" level=info msg="CreateContainer within sandbox \"be735726812c16adeed5a921be9a45e77ea1a3579ab90c725046b8a298148cef\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"183e4ce454d922c1b98854df71b55073e05280f1b8f990ec4256012b1b55c1ed\"" Dec 13 04:03:17.469325 env[1143]: time="2024-12-13T04:03:17.469288160Z" level=info msg="StartContainer for \"183e4ce454d922c1b98854df71b55073e05280f1b8f990ec4256012b1b55c1ed\"" Dec 13 04:03:17.505082 systemd[1]: Started cri-containerd-183e4ce454d922c1b98854df71b55073e05280f1b8f990ec4256012b1b55c1ed.scope. Dec 13 04:03:17.553204 env[1143]: time="2024-12-13T04:03:17.553150445Z" level=info msg="StartContainer for \"183e4ce454d922c1b98854df71b55073e05280f1b8f990ec4256012b1b55c1ed\" returns successfully" Dec 13 04:03:17.747986 kubelet[1388]: I1213 04:03:17.747792 1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.245632644 podStartE2EDuration="11.747715908s" podCreationTimestamp="2024-12-13 04:03:06 +0000 UTC" firstStartedPulling="2024-12-13 04:03:08.920460253 +0000 UTC m=+54.630130251" lastFinishedPulling="2024-12-13 04:03:17.422543517 +0000 UTC m=+63.132213515" observedRunningTime="2024-12-13 04:03:17.74767575 +0000 UTC m=+63.457345748" watchObservedRunningTime="2024-12-13 04:03:17.747715908 +0000 UTC m=+63.457385866" Dec 13 04:03:18.055772 kubelet[1388]: E1213 04:03:18.055718 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:18.444788 systemd[1]: run-containerd-runc-k8s.io-183e4ce454d922c1b98854df71b55073e05280f1b8f990ec4256012b1b55c1ed-runc.41E3rP.mount: Deactivated successfully. 
Dec 13 04:03:19.057200 kubelet[1388]: E1213 04:03:19.057115 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:20.057550 kubelet[1388]: E1213 04:03:20.057481 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:21.058783 kubelet[1388]: E1213 04:03:21.058712 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:22.059903 kubelet[1388]: E1213 04:03:22.059786 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:23.059999 kubelet[1388]: E1213 04:03:23.059952 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:24.061174 kubelet[1388]: E1213 04:03:24.061122 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:25.062300 kubelet[1388]: E1213 04:03:25.062253 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:26.063214 kubelet[1388]: E1213 04:03:26.063113 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:27.063471 kubelet[1388]: E1213 04:03:27.063384 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:27.415440 kubelet[1388]: I1213 04:03:27.415276 1388 topology_manager.go:215] "Topology Admit Handler" podUID="3b737ced-d433-44c8-aaf8-82a5277a73bc" podNamespace="default" podName="test-pod-1" Dec 13 04:03:27.429357 systemd[1]: Created slice kubepods-besteffort-pod3b737ced_d433_44c8_aaf8_82a5277a73bc.slice. Dec 13 04:03:27.565065 kubelet[1388]: I1213 04:03:27.565000 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df35f16c-b2a6-4def-967a-bb9873102bd7\" (UniqueName: \"kubernetes.io/nfs/3b737ced-d433-44c8-aaf8-82a5277a73bc-pvc-df35f16c-b2a6-4def-967a-bb9873102bd7\") pod \"test-pod-1\" (UID: \"3b737ced-d433-44c8-aaf8-82a5277a73bc\") " pod="default/test-pod-1" Dec 13 04:03:27.565230 kubelet[1388]: I1213 04:03:27.565095 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtcnf\" (UniqueName: \"kubernetes.io/projected/3b737ced-d433-44c8-aaf8-82a5277a73bc-kube-api-access-mtcnf\") pod \"test-pod-1\" (UID: \"3b737ced-d433-44c8-aaf8-82a5277a73bc\") " pod="default/test-pod-1" Dec 13 04:03:27.772128 kernel: FS-Cache: Loaded Dec 13 04:03:27.849575 kernel: RPC: Registered named UNIX socket transport module. Dec 13 04:03:27.849691 kernel: RPC: Registered udp transport module. Dec 13 04:03:27.849718 kernel: RPC: Registered tcp transport module. Dec 13 04:03:27.849741 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 04:03:27.929148 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 04:03:28.065278 kubelet[1388]: E1213 04:03:28.064357 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:28.155564 kernel: NFS: Registering the id_resolver key type Dec 13 04:03:28.155862 kernel: Key type id_resolver registered Dec 13 04:03:28.155951 kernel: Key type id_legacy registered Dec 13 04:03:28.237458 nfsidmap[2715]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 04:03:28.249649 nfsidmap[2716]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Dec 13 04:03:28.341224 env[1143]: time="2024-12-13T04:03:28.339973497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3b737ced-d433-44c8-aaf8-82a5277a73bc,Namespace:default,Attempt:0,}" Dec 13 04:03:28.450345 systemd-networkd[974]: lxcba9a94f396f0: Link UP Dec 13 04:03:28.460273 kernel: eth0: renamed from tmpf1e17 Dec 13 04:03:28.472671 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 04:03:28.472880 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcba9a94f396f0: link becomes ready Dec 13 04:03:28.472641 systemd-networkd[974]: lxcba9a94f396f0: Gained carrier Dec 13 04:03:28.790189 env[1143]: time="2024-12-13T04:03:28.790027186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:28.790189 env[1143]: time="2024-12-13T04:03:28.790138108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:28.790537 env[1143]: time="2024-12-13T04:03:28.790162753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:28.791307 env[1143]: time="2024-12-13T04:03:28.791163528Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1e171c8a274b732cb66106fbd60fe541a2f02bf5e86816c2d05fcb3cb42a51b pid=2744 runtime=io.containerd.runc.v2 Dec 13 04:03:28.813252 systemd[1]: Started cri-containerd-f1e171c8a274b732cb66106fbd60fe541a2f02bf5e86816c2d05fcb3cb42a51b.scope. Dec 13 04:03:28.816747 systemd[1]: run-containerd-runc-k8s.io-f1e171c8a274b732cb66106fbd60fe541a2f02bf5e86816c2d05fcb3cb42a51b-runc.CA6A3P.mount: Deactivated successfully. 
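The two nfsidmap messages above come from the NFSv4 ID mapping done while mounting the test pod's NFS volume: the owner string supplied by the server carries the domain nfs-server-provisioner.default.svc.cluster.local, which does not match the client's local idmapping domain (novalocal), so the name cannot be mapped to a local ID. A schematic of that comparison, not nfsidmap's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// mapsIntoDomain reproduces the check behind the nfsidmap messages in the
// log: an NFSv4 owner string "user@domain" only maps to a local ID when its
// domain part matches the local idmapping domain ("novalocal" here).
func mapsIntoDomain(owner, localDomain string) bool {
	_, domain, ok := strings.Cut(owner, "@")
	return ok && domain == localDomain
}

func main() {
	owner := "root@nfs-server-provisioner.default.svc.cluster.local"
	if !mapsIntoDomain(owner, "novalocal") {
		fmt.Printf("name '%s' does not map into domain 'novalocal'\n", owner)
	}
}
```

Unmapped owners typically fall back to the anonymous user; the mount itself proceeds, as the sandbox creation that follows shows.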
Dec 13 04:03:28.882133 env[1143]: time="2024-12-13T04:03:28.882032044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3b737ced-d433-44c8-aaf8-82a5277a73bc,Namespace:default,Attempt:0,} returns sandbox id \"f1e171c8a274b732cb66106fbd60fe541a2f02bf5e86816c2d05fcb3cb42a51b\"" Dec 13 04:03:28.884522 env[1143]: time="2024-12-13T04:03:28.884474758Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 04:03:29.067309 kubelet[1388]: E1213 04:03:29.067153 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:29.317124 env[1143]: time="2024-12-13T04:03:29.316992599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:29.322700 env[1143]: time="2024-12-13T04:03:29.322153797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:29.329902 env[1143]: time="2024-12-13T04:03:29.329844085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:29.336513 env[1143]: time="2024-12-13T04:03:29.336423783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:29.338710 env[1143]: time="2024-12-13T04:03:29.338618272Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 04:03:29.343540 env[1143]: time="2024-12-13T04:03:29.343406359Z" level=info msg="CreateContainer within sandbox \"f1e171c8a274b732cb66106fbd60fe541a2f02bf5e86816c2d05fcb3cb42a51b\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 04:03:29.419348 env[1143]: time="2024-12-13T04:03:29.419120148Z" level=info msg="CreateContainer within sandbox \"f1e171c8a274b732cb66106fbd60fe541a2f02bf5e86816c2d05fcb3cb42a51b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"70ab7d7e7c705cac5944f84dd097b62c76f65974020b44c9f05d08b087c60cf8\"" Dec 13 04:03:29.421038 env[1143]: time="2024-12-13T04:03:29.420979785Z" level=info msg="StartContainer for \"70ab7d7e7c705cac5944f84dd097b62c76f65974020b44c9f05d08b087c60cf8\"" Dec 13 04:03:29.460687 systemd[1]: Started cri-containerd-70ab7d7e7c705cac5944f84dd097b62c76f65974020b44c9f05d08b087c60cf8.scope. 
Dec 13 04:03:29.534828 env[1143]: time="2024-12-13T04:03:29.534743220Z" level=info msg="StartContainer for \"70ab7d7e7c705cac5944f84dd097b62c76f65974020b44c9f05d08b087c60cf8\" returns successfully" Dec 13 04:03:29.722012 systemd-networkd[974]: lxcba9a94f396f0: Gained IPv6LL Dec 13 04:03:29.788113 kubelet[1388]: I1213 04:03:29.787957 1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.332468535 podStartE2EDuration="20.787888809s" podCreationTimestamp="2024-12-13 04:03:09 +0000 UTC" firstStartedPulling="2024-12-13 04:03:28.883769192 +0000 UTC m=+74.593439160" lastFinishedPulling="2024-12-13 04:03:29.339189426 +0000 UTC m=+75.048859434" observedRunningTime="2024-12-13 04:03:29.787755907 +0000 UTC m=+75.497425874" watchObservedRunningTime="2024-12-13 04:03:29.787888809 +0000 UTC m=+75.497558788" Dec 13 04:03:30.068682 kubelet[1388]: E1213 04:03:30.068369 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:31.068624 kubelet[1388]: E1213 04:03:31.068572 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:32.069426 kubelet[1388]: E1213 04:03:32.069346 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:33.070493 kubelet[1388]: E1213 04:03:33.070450 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:34.071535 kubelet[1388]: E1213 04:03:34.071452 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:35.000812 kubelet[1388]: E1213 04:03:35.000756 1388 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:35.073362 kubelet[1388]: E1213 04:03:35.073248 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:36.074222 kubelet[1388]: E1213 04:03:36.073856 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:37.076198 kubelet[1388]: E1213 04:03:37.076132 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:37.794959 systemd[1]: run-containerd-runc-k8s.io-71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf-runc.pLAFLa.mount: Deactivated successfully. 
Dec 13 04:03:37.834479 env[1143]: time="2024-12-13T04:03:37.834311068Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 04:03:37.849645 env[1143]: time="2024-12-13T04:03:37.849553601Z" level=info msg="StopContainer for \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\" with timeout 2 (s)" Dec 13 04:03:37.850722 env[1143]: time="2024-12-13T04:03:37.850636321Z" level=info msg="Stop container \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\" with signal terminated" Dec 13 04:03:37.866421 systemd-networkd[974]: lxc_health: Link DOWN Dec 13 04:03:37.866438 systemd-networkd[974]: lxc_health: Lost carrier Dec 13 04:03:37.926432 systemd[1]: cri-containerd-71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf.scope: Deactivated successfully. Dec 13 04:03:37.927413 systemd[1]: cri-containerd-71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf.scope: Consumed 8.480s CPU time. Dec 13 04:03:37.970027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf-rootfs.mount: Deactivated successfully. Dec 13 04:03:38.078022 kubelet[1388]: E1213 04:03:38.077673 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:38.081210 env[1143]: time="2024-12-13T04:03:38.081097633Z" level=info msg="shim disconnected" id=71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf Dec 13 04:03:38.081398 env[1143]: time="2024-12-13T04:03:38.081207246Z" level=warning msg="cleaning up after shim disconnected" id=71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf namespace=k8s.io Dec 13 04:03:38.081398 env[1143]: time="2024-12-13T04:03:38.081232263Z" level=info msg="cleaning up dead shim" Dec 13 04:03:38.101712 env[1143]: time="2024-12-13T04:03:38.101581177Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2873 runtime=io.containerd.runc.v2\n" Dec 13 04:03:38.116889 env[1143]: time="2024-12-13T04:03:38.116771426Z" level=info msg="StopContainer for \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\" returns successfully" Dec 13 04:03:38.118752 env[1143]: time="2024-12-13T04:03:38.118645400Z" level=info msg="StopPodSandbox for \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\"" Dec 13 04:03:38.118971 env[1143]: time="2024-12-13T04:03:38.118884362Z" level=info msg="Container to stop \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.119139 env[1143]: time="2024-12-13T04:03:38.118965312Z" level=info msg="Container to stop \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.119139 env[1143]: time="2024-12-13T04:03:38.119018480Z" level=info msg="Container to stop \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.119296 env[1143]: time="2024-12-13T04:03:38.119122752Z" level=info msg="Container to stop 
\"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.119296 env[1143]: time="2024-12-13T04:03:38.119180279Z" level=info msg="Container to stop \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.123395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45-shm.mount: Deactivated successfully. Dec 13 04:03:38.137793 systemd[1]: cri-containerd-d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45.scope: Deactivated successfully. Dec 13 04:03:38.182630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45-rootfs.mount: Deactivated successfully. Dec 13 04:03:38.231021 env[1143]: time="2024-12-13T04:03:38.230870317Z" level=info msg="shim disconnected" id=d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45 Dec 13 04:03:38.231694 env[1143]: time="2024-12-13T04:03:38.231647414Z" level=warning msg="cleaning up after shim disconnected" id=d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45 namespace=k8s.io Dec 13 04:03:38.231868 env[1143]: time="2024-12-13T04:03:38.231833718Z" level=info msg="cleaning up dead shim" Dec 13 04:03:38.247940 env[1143]: time="2024-12-13T04:03:38.247861216Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2904 runtime=io.containerd.runc.v2\n" Dec 13 04:03:38.249311 env[1143]: time="2024-12-13T04:03:38.249256505Z" level=info msg="TearDown network for sandbox \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" successfully" Dec 13 04:03:38.249405 env[1143]: time="2024-12-13T04:03:38.249321284Z" level=info msg="StopPodSandbox for \"d1e02220dbe0de3929facb73ec369e8976aa2eaf623b2de811f6ae64301f4f45\" returns successfully" Dec 13 04:03:38.356916 kubelet[1388]: I1213 04:03:38.354490 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-xtables-lock\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.356916 kubelet[1388]: I1213 04:03:38.354668 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-config-path\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.356916 kubelet[1388]: I1213 04:03:38.354818 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-hostproc\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.356916 kubelet[1388]: I1213 04:03:38.354875 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-cgroup\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.356916 kubelet[1388]: I1213 04:03:38.354967 1388 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-bpf-maps\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.356916 kubelet[1388]: I1213 04:03:38.355047 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cni-path\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357515 kubelet[1388]: I1213 04:03:38.355211 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-kernel\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357515 kubelet[1388]: I1213 04:03:38.355301 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-etc-cni-netd\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357515 kubelet[1388]: I1213 04:03:38.355391 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-lib-modules\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357515 kubelet[1388]: I1213 04:03:38.355445 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-run\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357515 kubelet[1388]: I1213 04:03:38.355532 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-net\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357515 kubelet[1388]: I1213 04:03:38.355783 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fcds\" (UniqueName: \"kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-kube-api-access-8fcds\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357953 kubelet[1388]: I1213 04:03:38.355882 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-hubble-tls\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357953 kubelet[1388]: I1213 04:03:38.356047 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2ef9317-52d1-4510-9d86-533056dd345c-clustermesh-secrets\") pod \"a2ef9317-52d1-4510-9d86-533056dd345c\" (UID: \"a2ef9317-52d1-4510-9d86-533056dd345c\") " Dec 13 04:03:38.357953 kubelet[1388]: I1213 04:03:38.356396 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.358373 kubelet[1388]: I1213 04:03:38.358306 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.358585 kubelet[1388]: I1213 04:03:38.358550 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.358770 kubelet[1388]: I1213 04:03:38.358737 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.359003 kubelet[1388]: I1213 04:03:38.358969 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.359628 kubelet[1388]: I1213 04:03:38.359218 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.360204 kubelet[1388]: I1213 04:03:38.360165 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.363185 kubelet[1388]: I1213 04:03:38.360345 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.365220 kubelet[1388]: I1213 04:03:38.360427 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.378272 kubelet[1388]: I1213 04:03:38.378215 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.379394 kubelet[1388]: I1213 04:03:38.379346 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:03:38.389146 kubelet[1388]: I1213 04:03:38.389020 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ef9317-52d1-4510-9d86-533056dd345c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:03:38.391331 kubelet[1388]: I1213 04:03:38.391267 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-kube-api-access-8fcds" (OuterVolumeSpecName: "kube-api-access-8fcds") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "kube-api-access-8fcds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:38.393775 kubelet[1388]: I1213 04:03:38.393680 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2ef9317-52d1-4510-9d86-533056dd345c" (UID: "a2ef9317-52d1-4510-9d86-533056dd345c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:38.457171 kubelet[1388]: I1213 04:03:38.457015 1388 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2ef9317-52d1-4510-9d86-533056dd345c-clustermesh-secrets\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.457171 kubelet[1388]: I1213 04:03:38.457099 1388 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-net\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.457171 kubelet[1388]: I1213 04:03:38.457127 1388 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8fcds\" (UniqueName: \"kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-kube-api-access-8fcds\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.457171 kubelet[1388]: I1213 04:03:38.457149 1388 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2ef9317-52d1-4510-9d86-533056dd345c-hubble-tls\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.457171 kubelet[1388]: I1213 04:03:38.457169 1388 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-xtables-lock\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.457171 kubelet[1388]: I1213 04:03:38.457188 1388 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-config-path\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.457171 kubelet[1388]: I1213 04:03:38.457207 1388 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-hostproc\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.458240 kubelet[1388]: I1213 04:03:38.457227 1388 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-cgroup\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.458240 kubelet[1388]: I1213 04:03:38.457247 1388 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cilium-run\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.458240 kubelet[1388]: I1213 04:03:38.457267 1388 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-bpf-maps\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.458240 kubelet[1388]: I1213 04:03:38.457285 1388 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-cni-path\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.458240 kubelet[1388]: I1213 04:03:38.457303 1388 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-host-proc-sys-kernel\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.458240 kubelet[1388]: I1213 04:03:38.457322 1388 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-etc-cni-netd\") on 
node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.458240 kubelet[1388]: I1213 04:03:38.457354 1388 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ef9317-52d1-4510-9d86-533056dd345c-lib-modules\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:38.782462 systemd[1]: var-lib-kubelet-pods-a2ef9317\x2d52d1\x2d4510\x2d9d86\x2d533056dd345c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fcds.mount: Deactivated successfully. Dec 13 04:03:38.782589 systemd[1]: var-lib-kubelet-pods-a2ef9317\x2d52d1\x2d4510\x2d9d86\x2d533056dd345c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:03:38.782668 systemd[1]: var-lib-kubelet-pods-a2ef9317\x2d52d1\x2d4510\x2d9d86\x2d533056dd345c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:03:38.807258 kubelet[1388]: I1213 04:03:38.807196 1388 scope.go:117] "RemoveContainer" containerID="71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf" Dec 13 04:03:38.811779 env[1143]: time="2024-12-13T04:03:38.811737211Z" level=info msg="RemoveContainer for \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\"" Dec 13 04:03:38.816301 systemd[1]: Removed slice kubepods-burstable-poda2ef9317_52d1_4510_9d86_533056dd345c.slice. Dec 13 04:03:38.816458 systemd[1]: kubepods-burstable-poda2ef9317_52d1_4510_9d86_533056dd345c.slice: Consumed 8.618s CPU time. Dec 13 04:03:38.836239 env[1143]: time="2024-12-13T04:03:38.836186787Z" level=info msg="RemoveContainer for \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\" returns successfully" Dec 13 04:03:38.837351 kubelet[1388]: I1213 04:03:38.837188 1388 scope.go:117] "RemoveContainer" containerID="1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275" Dec 13 04:03:38.839109 env[1143]: time="2024-12-13T04:03:38.838915360Z" level=info msg="RemoveContainer for \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\"" Dec 13 04:03:38.855699 env[1143]: time="2024-12-13T04:03:38.855563183Z" level=info msg="RemoveContainer for \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\" returns successfully" Dec 13 04:03:38.856194 kubelet[1388]: I1213 04:03:38.856120 1388 scope.go:117] "RemoveContainer" containerID="cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36" Dec 13 04:03:38.859215 env[1143]: time="2024-12-13T04:03:38.859172736Z" level=info msg="RemoveContainer for \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\"" Dec 13 04:03:38.885903 env[1143]: time="2024-12-13T04:03:38.885799525Z" level=info msg="RemoveContainer for \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\" returns successfully" Dec 13 04:03:38.886524 kubelet[1388]: I1213 04:03:38.886342 1388 scope.go:117] "RemoveContainer" containerID="ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27" Dec 13 04:03:38.889254 env[1143]: time="2024-12-13T04:03:38.889189111Z" level=info msg="RemoveContainer for \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\"" Dec 13 04:03:38.905672 env[1143]: time="2024-12-13T04:03:38.905572385Z" level=info msg="RemoveContainer for \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\" returns successfully" Dec 13 04:03:38.906097 kubelet[1388]: I1213 04:03:38.906010 1388 scope.go:117] "RemoveContainer" containerID="fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786" Dec 13 04:03:38.908634 env[1143]: 
time="2024-12-13T04:03:38.908594080Z" level=info msg="RemoveContainer for \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\"" Dec 13 04:03:38.919230 env[1143]: time="2024-12-13T04:03:38.919025573Z" level=info msg="RemoveContainer for \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\" returns successfully" Dec 13 04:03:38.919536 kubelet[1388]: I1213 04:03:38.919471 1388 scope.go:117] "RemoveContainer" containerID="71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf" Dec 13 04:03:38.920416 env[1143]: time="2024-12-13T04:03:38.919996178Z" level=error msg="ContainerStatus for \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\": not found" Dec 13 04:03:38.920730 kubelet[1388]: E1213 04:03:38.920665 1388 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\": not found" containerID="71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf" Dec 13 04:03:38.920890 kubelet[1388]: I1213 04:03:38.920851 1388 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf"} err="failed to get container status \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"71e5dde8aedda281f0bc39355c0a4e691ba03f053f6a75a9ea8ab2a69c435cbf\": not found" Dec 13 04:03:38.921014 kubelet[1388]: I1213 04:03:38.920901 1388 scope.go:117] "RemoveContainer" containerID="1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275" Dec 13 04:03:38.921454 env[1143]: time="2024-12-13T04:03:38.921318612Z" level=error msg="ContainerStatus for \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\": not found" Dec 13 04:03:38.921728 kubelet[1388]: E1213 04:03:38.921653 1388 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\": not found" containerID="1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275" Dec 13 04:03:38.921828 kubelet[1388]: I1213 04:03:38.921737 1388 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275"} err="failed to get container status \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bcc009296cb77c4951dc064ee16965807bef0b25bb6b84fca8eaf49708bc275\": not found" Dec 13 04:03:38.921828 kubelet[1388]: I1213 04:03:38.921764 1388 scope.go:117] "RemoveContainer" containerID="cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36" Dec 13 04:03:38.922443 env[1143]: time="2024-12-13T04:03:38.922310315Z" level=error msg="ContainerStatus for \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\" failed" error="rpc error: code = NotFound 
desc = an error occurred when try to find container \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\": not found" Dec 13 04:03:38.922662 kubelet[1388]: E1213 04:03:38.922625 1388 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\": not found" containerID="cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36" Dec 13 04:03:38.922797 kubelet[1388]: I1213 04:03:38.922691 1388 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36"} err="failed to get container status \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd3ed1bb59f9069c284003855d1690912e56a6b4ea65446b17e25d40c6d1ac36\": not found" Dec 13 04:03:38.922797 kubelet[1388]: I1213 04:03:38.922716 1388 scope.go:117] "RemoveContainer" containerID="ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27" Dec 13 04:03:38.923513 env[1143]: time="2024-12-13T04:03:38.923392676Z" level=error msg="ContainerStatus for \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\": not found" Dec 13 04:03:38.923957 kubelet[1388]: E1213 04:03:38.923913 1388 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\": not found" containerID="ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27" Dec 13 04:03:38.924142 kubelet[1388]: I1213 04:03:38.923990 1388 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27"} err="failed to get container status \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce44717ecfb8ce54c552e7dfc3b2bcecd9042b84a1bd3b4738feb1b76bb04b27\": not found" Dec 13 04:03:38.924142 kubelet[1388]: I1213 04:03:38.924017 1388 scope.go:117] "RemoveContainer" containerID="fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786" Dec 13 04:03:38.924867 env[1143]: time="2024-12-13T04:03:38.924734667Z" level=error msg="ContainerStatus for \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\": not found" Dec 13 04:03:38.925203 kubelet[1388]: E1213 04:03:38.925131 1388 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\": not found" containerID="fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786" Dec 13 04:03:38.925321 kubelet[1388]: I1213 04:03:38.925207 1388 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786"} err="failed to get container status \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\": rpc error: code = NotFound desc = an error occurred when try to find container \"fab0825c3caa0f0dac9d1717a11eedb45c33938bc7a45d7a9d0123d51381d786\": not found" Dec 13 04:03:39.078152 kubelet[1388]: E1213 04:03:39.077976 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:39.251326 kubelet[1388]: I1213 04:03:39.251277 1388 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" path="/var/lib/kubelet/pods/a2ef9317-52d1-4510-9d86-533056dd345c/volumes" Dec 13 04:03:40.079980 kubelet[1388]: E1213 04:03:40.079888 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:40.205831 kubelet[1388]: E1213 04:03:40.205764 1388 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:03:41.080904 kubelet[1388]: E1213 04:03:41.080782 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:42.082022 kubelet[1388]: E1213 04:03:42.081973 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:42.430949 kubelet[1388]: I1213 04:03:42.430800 1388 topology_manager.go:215] "Topology Admit Handler" podUID="d08d3fd7-ab04-498a-94e4-c8e3956f17fc" podNamespace="kube-system" podName="cilium-operator-5cc964979-4fmtm" Dec 13 04:03:42.431298 kubelet[1388]: E1213 04:03:42.431272 1388 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" containerName="clean-cilium-state" Dec 13 04:03:42.431448 kubelet[1388]: E1213 04:03:42.431428 1388 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" containerName="cilium-agent" Dec 13 04:03:42.431633 kubelet[1388]: E1213 04:03:42.431606 1388 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" containerName="mount-bpf-fs" Dec 13 04:03:42.431792 kubelet[1388]: E1213 04:03:42.431771 1388 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" containerName="mount-cgroup" Dec 13 04:03:42.431929 kubelet[1388]: E1213 04:03:42.431909 1388 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" containerName="apply-sysctl-overwrites" Dec 13 04:03:42.432141 kubelet[1388]: I1213 04:03:42.432114 1388 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ef9317-52d1-4510-9d86-533056dd345c" containerName="cilium-agent" Dec 13 04:03:42.432498 kubelet[1388]: I1213 04:03:42.432465 1388 topology_manager.go:215] "Topology Admit Handler" podUID="7eaef61f-6e14-4177-a3a2-5ca32ef66f21" podNamespace="kube-system" podName="cilium-trpmn" Dec 13 04:03:42.448608 kubelet[1388]: W1213 04:03:42.448549 1388 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.24.4.88" cannot list resource "secrets" in API group "" in the namespace "kube-system": no 
relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.449477 kubelet[1388]: E1213 04:03:42.449433 1388 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.24.4.88" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.449738 kubelet[1388]: W1213 04:03:42.449021 1388 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.88" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.450026 kubelet[1388]: E1213 04:03:42.449982 1388 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.88" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.450160 systemd[1]: Created slice kubepods-besteffort-podd08d3fd7_ab04_498a_94e4_c8e3956f17fc.slice. Dec 13 04:03:42.454380 kubelet[1388]: W1213 04:03:42.449210 1388 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.24.4.88" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.454380 kubelet[1388]: W1213 04:03:42.449322 1388 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.24.4.88" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.454380 kubelet[1388]: E1213 04:03:42.453295 1388 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.24.4.88" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.454380 kubelet[1388]: E1213 04:03:42.453403 1388 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.24.4.88" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.88' and this object Dec 13 04:03:42.466693 systemd[1]: Created slice kubepods-burstable-pod7eaef61f_6e14_4177_a3a2_5ca32ef66f21.slice. 
Dec 13 04:03:42.584943 kubelet[1388]: I1213 04:03:42.584891 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-clustermesh-secrets\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.585426 kubelet[1388]: I1213 04:03:42.585358 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-xtables-lock\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.585724 kubelet[1388]: I1213 04:03:42.585696 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-bpf-maps\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.585946 kubelet[1388]: I1213 04:03:42.585922 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-ipsec-secrets\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.586188 kubelet[1388]: I1213 04:03:42.586158 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-cgroup\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.586409 kubelet[1388]: I1213 04:03:42.586386 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-run\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.586612 kubelet[1388]: I1213 04:03:42.586590 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hostproc\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.586849 kubelet[1388]: I1213 04:03:42.586825 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hubble-tls\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.587103 kubelet[1388]: I1213 04:03:42.587036 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzxz\" (UniqueName: \"kubernetes.io/projected/d08d3fd7-ab04-498a-94e4-c8e3956f17fc-kube-api-access-bwzxz\") pod \"cilium-operator-5cc964979-4fmtm\" (UID: \"d08d3fd7-ab04-498a-94e4-c8e3956f17fc\") " pod="kube-system/cilium-operator-5cc964979-4fmtm" Dec 13 04:03:42.587322 kubelet[1388]: I1213 04:03:42.587298 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cni-path\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.587521 kubelet[1388]: I1213 04:03:42.587498 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-etc-cni-netd\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.587714 kubelet[1388]: I1213 04:03:42.587692 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-config-path\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.587913 kubelet[1388]: I1213 04:03:42.587891 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-kernel\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.588153 kubelet[1388]: I1213 04:03:42.588129 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d08d3fd7-ab04-498a-94e4-c8e3956f17fc-cilium-config-path\") pod \"cilium-operator-5cc964979-4fmtm\" (UID: \"d08d3fd7-ab04-498a-94e4-c8e3956f17fc\") " pod="kube-system/cilium-operator-5cc964979-4fmtm" Dec 13 04:03:42.588361 kubelet[1388]: I1213 04:03:42.588339 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-lib-modules\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.588595 kubelet[1388]: I1213 04:03:42.588571 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-net\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:42.588798 kubelet[1388]: I1213 04:03:42.588775 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvsr\" (UniqueName: \"kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-kube-api-access-xhvsr\") pod \"cilium-trpmn\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " pod="kube-system/cilium-trpmn" Dec 13 04:03:43.083700 kubelet[1388]: E1213 04:03:43.083580 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:43.691409 kubelet[1388]: E1213 04:03:43.691273 1388 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 04:03:43.691409 kubelet[1388]: E1213 04:03:43.691411 1388 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d08d3fd7-ab04-498a-94e4-c8e3956f17fc-cilium-config-path podName:d08d3fd7-ab04-498a-94e4-c8e3956f17fc nodeName:}" 
failed. No retries permitted until 2024-12-13 04:03:44.191383732 +0000 UTC m=+89.901053690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d08d3fd7-ab04-498a-94e4-c8e3956f17fc-cilium-config-path") pod "cilium-operator-5cc964979-4fmtm" (UID: "d08d3fd7-ab04-498a-94e4-c8e3956f17fc") : failed to sync configmap cache: timed out waiting for the condition Dec 13 04:03:43.692617 kubelet[1388]: E1213 04:03:43.691711 1388 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 04:03:43.692617 kubelet[1388]: E1213 04:03:43.691781 1388 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-config-path podName:7eaef61f-6e14-4177-a3a2-5ca32ef66f21 nodeName:}" failed. No retries permitted until 2024-12-13 04:03:44.191763047 +0000 UTC m=+89.901433005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-config-path") pod "cilium-trpmn" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21") : failed to sync configmap cache: timed out waiting for the condition Dec 13 04:03:43.695508 kubelet[1388]: E1213 04:03:43.695426 1388 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Dec 13 04:03:43.695839 kubelet[1388]: E1213 04:03:43.695790 1388 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-trpmn: failed to sync secret cache: timed out waiting for the condition Dec 13 04:03:43.697417 kubelet[1388]: E1213 04:03:43.697348 1388 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hubble-tls podName:7eaef61f-6e14-4177-a3a2-5ca32ef66f21 nodeName:}" failed. No retries permitted until 2024-12-13 04:03:44.197234955 +0000 UTC m=+89.906905053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hubble-tls") pod "cilium-trpmn" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21") : failed to sync secret cache: timed out waiting for the condition Dec 13 04:03:44.084705 kubelet[1388]: E1213 04:03:44.084629 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:44.262519 env[1143]: time="2024-12-13T04:03:44.261409069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4fmtm,Uid:d08d3fd7-ab04-498a-94e4-c8e3956f17fc,Namespace:kube-system,Attempt:0,}" Dec 13 04:03:44.293510 env[1143]: time="2024-12-13T04:03:44.293290557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:44.293878 env[1143]: time="2024-12-13T04:03:44.293780407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:44.294210 env[1143]: time="2024-12-13T04:03:44.294146227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:44.295266 env[1143]: time="2024-12-13T04:03:44.295193035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trpmn,Uid:7eaef61f-6e14-4177-a3a2-5ca32ef66f21,Namespace:kube-system,Attempt:0,}" Dec 13 04:03:44.296141 env[1143]: time="2024-12-13T04:03:44.294721998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01f78ff02c17dd08c24756e900c30db1c67453a3d9aa467df71f58474dc8473b pid=2934 runtime=io.containerd.runc.v2 Dec 13 04:03:44.343983 systemd[1]: Started cri-containerd-01f78ff02c17dd08c24756e900c30db1c67453a3d9aa467df71f58474dc8473b.scope. Dec 13 04:03:44.356994 env[1143]: time="2024-12-13T04:03:44.356886960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:44.357160 env[1143]: time="2024-12-13T04:03:44.356990022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:44.357160 env[1143]: time="2024-12-13T04:03:44.357027822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:44.357330 env[1143]: time="2024-12-13T04:03:44.357281193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b pid=2969 runtime=io.containerd.runc.v2 Dec 13 04:03:44.382865 systemd[1]: Started cri-containerd-d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b.scope. Dec 13 04:03:44.419123 env[1143]: time="2024-12-13T04:03:44.419045660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4fmtm,Uid:d08d3fd7-ab04-498a-94e4-c8e3956f17fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"01f78ff02c17dd08c24756e900c30db1c67453a3d9aa467df71f58474dc8473b\"" Dec 13 04:03:44.421239 env[1143]: time="2024-12-13T04:03:44.421027306Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 04:03:44.431181 env[1143]: time="2024-12-13T04:03:44.431048894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trpmn,Uid:7eaef61f-6e14-4177-a3a2-5ca32ef66f21,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b\"" Dec 13 04:03:44.434654 env[1143]: time="2024-12-13T04:03:44.434612844Z" level=info msg="CreateContainer within sandbox \"d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:03:44.455570 env[1143]: time="2024-12-13T04:03:44.455483702Z" level=info msg="CreateContainer within sandbox \"d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\"" Dec 13 04:03:44.456621 env[1143]: time="2024-12-13T04:03:44.456523845Z" level=info msg="StartContainer for \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\"" Dec 13 04:03:44.471925 systemd[1]: Started cri-containerd-4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf.scope. 
Dec 13 04:03:44.484388 systemd[1]: cri-containerd-4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf.scope: Deactivated successfully. Dec 13 04:03:44.527499 env[1143]: time="2024-12-13T04:03:44.527411560Z" level=info msg="shim disconnected" id=4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf Dec 13 04:03:44.528079 env[1143]: time="2024-12-13T04:03:44.527996729Z" level=warning msg="cleaning up after shim disconnected" id=4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf namespace=k8s.io Dec 13 04:03:44.528277 env[1143]: time="2024-12-13T04:03:44.528241434Z" level=info msg="cleaning up dead shim" Dec 13 04:03:44.545671 env[1143]: time="2024-12-13T04:03:44.545533987Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3033 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:03:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 04:03:44.546361 env[1143]: time="2024-12-13T04:03:44.546117983Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Dec 13 04:03:44.550321 env[1143]: time="2024-12-13T04:03:44.550225042Z" level=error msg="Failed to pipe stdout of container \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\"" error="reading from a closed fifo" Dec 13 04:03:44.550461 env[1143]: time="2024-12-13T04:03:44.550357929Z" level=error msg="Failed to pipe stderr of container \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\"" error="reading from a closed fifo" Dec 13 04:03:44.554549 env[1143]: time="2024-12-13T04:03:44.554436565Z" level=error msg="StartContainer for \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 04:03:44.555287 kubelet[1388]: E1213 04:03:44.555186 1388 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf" Dec 13 04:03:44.559030 kubelet[1388]: E1213 04:03:44.558957 1388 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 04:03:44.559030 kubelet[1388]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 04:03:44.559030 kubelet[1388]: rm /hostbin/cilium-mount Dec 13 04:03:44.559347 kubelet[1388]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xhvsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-trpmn_kube-system(7eaef61f-6e14-4177-a3a2-5ca32ef66f21): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 04:03:44.559347 kubelet[1388]: E1213 04:03:44.559194 1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-trpmn" podUID="7eaef61f-6e14-4177-a3a2-5ca32ef66f21" Dec 13 04:03:44.834571 env[1143]: time="2024-12-13T04:03:44.834469021Z" level=info msg="StopPodSandbox for \"d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b\"" Dec 13 04:03:44.834949 env[1143]: time="2024-12-13T04:03:44.834678581Z" level=info msg="Container to stop \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:44.850595 systemd[1]: cri-containerd-d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b.scope: Deactivated successfully. 
Dec 13 04:03:44.907687 env[1143]: time="2024-12-13T04:03:44.907576535Z" level=info msg="shim disconnected" id=d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b Dec 13 04:03:44.907687 env[1143]: time="2024-12-13T04:03:44.907684155Z" level=warning msg="cleaning up after shim disconnected" id=d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b namespace=k8s.io Dec 13 04:03:44.908110 env[1143]: time="2024-12-13T04:03:44.907710353Z" level=info msg="cleaning up dead shim" Dec 13 04:03:44.928122 env[1143]: time="2024-12-13T04:03:44.928015539Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3066 runtime=io.containerd.runc.v2\n" Dec 13 04:03:44.929184 env[1143]: time="2024-12-13T04:03:44.929124652Z" level=info msg="TearDown network for sandbox \"d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b\" successfully" Dec 13 04:03:44.929404 env[1143]: time="2024-12-13T04:03:44.929357184Z" level=info msg="StopPodSandbox for \"d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b\" returns successfully" Dec 13 04:03:45.013078 kubelet[1388]: I1213 04:03:45.012965 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-clustermesh-secrets\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013078 kubelet[1388]: I1213 04:03:45.013093 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-bpf-maps\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013503 kubelet[1388]: I1213 04:03:45.013165 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-ipsec-secrets\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013503 kubelet[1388]: I1213 04:03:45.013217 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-run\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013503 kubelet[1388]: I1213 04:03:45.013269 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cni-path\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013503 kubelet[1388]: I1213 04:03:45.013328 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhvsr\" (UniqueName: \"kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-kube-api-access-xhvsr\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013503 kubelet[1388]: I1213 04:03:45.013377 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hostproc\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: 
\"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013503 kubelet[1388]: I1213 04:03:45.013427 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-etc-cni-netd\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.013503 kubelet[1388]: I1213 04:03:45.013489 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-config-path\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.013541 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-kernel\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.013615 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-cgroup\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.013671 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hubble-tls\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.013719 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-lib-modules\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.013764 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-xtables-lock\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.013814 1388 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-net\") pod \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\" (UID: \"7eaef61f-6e14-4177-a3a2-5ca32ef66f21\") " Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.013942 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.014172 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.014363 kubelet[1388]: I1213 04:03:45.014252 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.015869 kubelet[1388]: I1213 04:03:45.015804 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.016095 kubelet[1388]: I1213 04:03:45.015899 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cni-path" (OuterVolumeSpecName: "cni-path") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.016766 kubelet[1388]: I1213 04:03:45.016706 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.017016 kubelet[1388]: I1213 04:03:45.016977 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.024191 kubelet[1388]: I1213 04:03:45.020991 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hostproc" (OuterVolumeSpecName: "hostproc") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.025158 kubelet[1388]: I1213 04:03:45.024993 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.029017 kubelet[1388]: I1213 04:03:45.025049 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:45.029294 kubelet[1388]: I1213 04:03:45.028767 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:03:45.029460 kubelet[1388]: I1213 04:03:45.028916 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:03:45.033180 kubelet[1388]: I1213 04:03:45.033035 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:03:45.034233 kubelet[1388]: I1213 04:03:45.034018 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-kube-api-access-xhvsr" (OuterVolumeSpecName: "kube-api-access-xhvsr") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "kube-api-access-xhvsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:45.036791 kubelet[1388]: I1213 04:03:45.036737 1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7eaef61f-6e14-4177-a3a2-5ca32ef66f21" (UID: "7eaef61f-6e14-4177-a3a2-5ca32ef66f21"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:45.086211 kubelet[1388]: E1213 04:03:45.085922 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:45.114796 kubelet[1388]: I1213 04:03:45.114548 1388 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-cgroup\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.115346 kubelet[1388]: I1213 04:03:45.115307 1388 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hubble-tls\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.115652 kubelet[1388]: I1213 04:03:45.115621 1388 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-lib-modules\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.116025 kubelet[1388]: I1213 04:03:45.115920 1388 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-xtables-lock\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.116314 kubelet[1388]: I1213 04:03:45.116283 1388 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-net\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.116628 kubelet[1388]: I1213 04:03:45.116599 1388 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xhvsr\" (UniqueName: \"kubernetes.io/projected/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-kube-api-access-xhvsr\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.116854 kubelet[1388]: I1213 04:03:45.116825 1388 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-clustermesh-secrets\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.117117 kubelet[1388]: I1213 04:03:45.117087 1388 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-bpf-maps\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.117354 kubelet[1388]: I1213 04:03:45.117324 1388 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-ipsec-secrets\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.117621 kubelet[1388]: I1213 04:03:45.117586 1388 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-run\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.117870 kubelet[1388]: I1213 04:03:45.117834 1388 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cni-path\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.118342 kubelet[1388]: I1213 04:03:45.118307 1388 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-hostproc\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.118614 kubelet[1388]: I1213 04:03:45.118580 1388 
reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-etc-cni-netd\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.118866 kubelet[1388]: I1213 04:03:45.118834 1388 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-cilium-config-path\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.119177 kubelet[1388]: I1213 04:03:45.119142 1388 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7eaef61f-6e14-4177-a3a2-5ca32ef66f21-host-proc-sys-kernel\") on node \"172.24.4.88\" DevicePath \"\"" Dec 13 04:03:45.207438 kubelet[1388]: E1213 04:03:45.207401 1388 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:03:45.225481 systemd[1]: run-containerd-runc-k8s.io-01f78ff02c17dd08c24756e900c30db1c67453a3d9aa467df71f58474dc8473b-runc.JPI38K.mount: Deactivated successfully. Dec 13 04:03:45.225685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1c4a39219d054b443e3c7de06ea87c8a38e1a94d56fbda33bd91808f7c8321b-shm.mount: Deactivated successfully. Dec 13 04:03:45.225828 systemd[1]: var-lib-kubelet-pods-7eaef61f\x2d6e14\x2d4177\x2da3a2\x2d5ca32ef66f21-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:03:45.225961 systemd[1]: var-lib-kubelet-pods-7eaef61f\x2d6e14\x2d4177\x2da3a2\x2d5ca32ef66f21-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:03:45.226141 systemd[1]: var-lib-kubelet-pods-7eaef61f\x2d6e14\x2d4177\x2da3a2\x2d5ca32ef66f21-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 04:03:45.226286 systemd[1]: var-lib-kubelet-pods-7eaef61f\x2d6e14\x2d4177\x2da3a2\x2d5ca32ef66f21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhvsr.mount: Deactivated successfully. Dec 13 04:03:45.258114 systemd[1]: Removed slice kubepods-burstable-pod7eaef61f_6e14_4177_a3a2_5ca32ef66f21.slice. 
Dec 13 04:03:45.844875 kubelet[1388]: I1213 04:03:45.844801 1388 scope.go:117] "RemoveContainer" containerID="4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf" Dec 13 04:03:45.854564 env[1143]: time="2024-12-13T04:03:45.854343252Z" level=info msg="RemoveContainer for \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\"" Dec 13 04:03:45.892713 env[1143]: time="2024-12-13T04:03:45.892609718Z" level=info msg="RemoveContainer for \"4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf\" returns successfully" Dec 13 04:03:46.088466 kubelet[1388]: E1213 04:03:46.088288 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:46.334301 kubelet[1388]: I1213 04:03:46.334246 1388 topology_manager.go:215] "Topology Admit Handler" podUID="5c359506-9590-4171-b949-3658d7fd2611" podNamespace="kube-system" podName="cilium-kwfms" Dec 13 04:03:46.334674 kubelet[1388]: E1213 04:03:46.334645 1388 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7eaef61f-6e14-4177-a3a2-5ca32ef66f21" containerName="mount-cgroup" Dec 13 04:03:46.334865 kubelet[1388]: I1213 04:03:46.334840 1388 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eaef61f-6e14-4177-a3a2-5ca32ef66f21" containerName="mount-cgroup" Dec 13 04:03:46.348735 systemd[1]: Created slice kubepods-burstable-pod5c359506_9590_4171_b949_3658d7fd2611.slice. Dec 13 04:03:46.429327 kubelet[1388]: I1213 04:03:46.429267 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-etc-cni-netd\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.429778 kubelet[1388]: I1213 04:03:46.429748 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-bpf-maps\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.430147 kubelet[1388]: I1213 04:03:46.430044 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5c359506-9590-4171-b949-3658d7fd2611-cilium-ipsec-secrets\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.430447 kubelet[1388]: I1213 04:03:46.430413 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c359506-9590-4171-b949-3658d7fd2611-clustermesh-secrets\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.430782 kubelet[1388]: I1213 04:03:46.430744 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c359506-9590-4171-b949-3658d7fd2611-hubble-tls\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.431202 kubelet[1388]: I1213 04:03:46.431169 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8nx\" (UniqueName: 
\"kubernetes.io/projected/5c359506-9590-4171-b949-3658d7fd2611-kube-api-access-7j8nx\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.431516 kubelet[1388]: I1213 04:03:46.431487 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-cilium-run\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.431873 kubelet[1388]: I1213 04:03:46.431831 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-host-proc-sys-net\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.432198 kubelet[1388]: I1213 04:03:46.432168 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-host-proc-sys-kernel\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.432540 kubelet[1388]: I1213 04:03:46.432512 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c359506-9590-4171-b949-3658d7fd2611-cilium-config-path\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.432828 kubelet[1388]: I1213 04:03:46.432803 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-lib-modules\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.433142 kubelet[1388]: I1213 04:03:46.433117 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-cilium-cgroup\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.433459 kubelet[1388]: I1213 04:03:46.433433 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-xtables-lock\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.433904 kubelet[1388]: I1213 04:03:46.433836 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-cni-path\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 04:03:46.434259 kubelet[1388]: I1213 04:03:46.434232 1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c359506-9590-4171-b949-3658d7fd2611-hostproc\") pod \"cilium-kwfms\" (UID: \"5c359506-9590-4171-b949-3658d7fd2611\") " pod="kube-system/cilium-kwfms" Dec 13 
04:03:46.661428 env[1143]: time="2024-12-13T04:03:46.659112439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwfms,Uid:5c359506-9590-4171-b949-3658d7fd2611,Namespace:kube-system,Attempt:0,}" Dec 13 04:03:47.088907 kubelet[1388]: E1213 04:03:47.088780 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:47.235134 kubelet[1388]: I1213 04:03:47.235085 1388 setters.go:568] "Node became not ready" node="172.24.4.88" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T04:03:47Z","lastTransitionTime":"2024-12-13T04:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 04:03:47.249768 kubelet[1388]: I1213 04:03:47.249718 1388 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7eaef61f-6e14-4177-a3a2-5ca32ef66f21" path="/var/lib/kubelet/pods/7eaef61f-6e14-4177-a3a2-5ca32ef66f21/volumes" Dec 13 04:03:47.509611 env[1143]: time="2024-12-13T04:03:47.509430173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:47.509611 env[1143]: time="2024-12-13T04:03:47.509529226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:47.509611 env[1143]: time="2024-12-13T04:03:47.509562749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:47.510917 env[1143]: time="2024-12-13T04:03:47.510800528Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682 pid=3097 runtime=io.containerd.runc.v2 Dec 13 04:03:47.538328 systemd[1]: Started cri-containerd-2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682.scope. Dec 13 04:03:47.610534 env[1143]: time="2024-12-13T04:03:47.610485867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwfms,Uid:5c359506-9590-4171-b949-3658d7fd2611,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\"" Dec 13 04:03:47.614514 env[1143]: time="2024-12-13T04:03:47.614484153Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:03:47.638919 kubelet[1388]: W1213 04:03:47.638846 1388 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eaef61f_6e14_4177_a3a2_5ca32ef66f21.slice/cri-containerd-4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf.scope WatchSource:0}: container "4a870352085700e79ba281793d60a219f994426c14c0c256547c5118af322bcf" in namespace "k8s.io": not found Dec 13 04:03:48.089646 kubelet[1388]: E1213 04:03:48.089512 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:48.162678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935741209.mount: Deactivated successfully. 
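
The reconciler entries above identify every volume of the replacement pod cilium-kwfms by a UniqueName of the form <plugin>/<pod-uid>-<volume-name>. A small, illustrative Python parse of those strings as they appear in this excerpt (the regex is tailored to this log's escaped quoting and is an assumption, not kubelet code):

import re

UNIQUE_NAME_RE = re.compile(
    r'UniqueName: \\+"(kubernetes\.io/[a-z-]+)/'        # volume plugin
    r'([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})-'   # pod UID
    r'([a-z0-9-]+)\\+"')                                # volume name

def volumes_by_pod(journal_text: str) -> dict:
    # Map pod UID -> set of (volume name, plugin) seen in reconciler entries.
    seen = {}
    for plugin, pod_uid, name in UNIQUE_NAME_RE.findall(journal_text):
        seen.setdefault(pod_uid, set()).add((name, plugin))
    return seen
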
Dec 13 04:03:48.358149 env[1143]: time="2024-12-13T04:03:48.357880860Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d\"" Dec 13 04:03:48.359704 env[1143]: time="2024-12-13T04:03:48.359588797Z" level=info msg="StartContainer for \"900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d\"" Dec 13 04:03:48.404948 systemd[1]: Started cri-containerd-900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d.scope. Dec 13 04:03:48.564320 env[1143]: time="2024-12-13T04:03:48.564165860Z" level=info msg="StartContainer for \"900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d\" returns successfully" Dec 13 04:03:48.608042 systemd[1]: cri-containerd-900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d.scope: Deactivated successfully. Dec 13 04:03:48.646707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d-rootfs.mount: Deactivated successfully. Dec 13 04:03:48.679422 env[1143]: time="2024-12-13T04:03:48.679339701Z" level=info msg="shim disconnected" id=900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d Dec 13 04:03:48.679943 env[1143]: time="2024-12-13T04:03:48.679896369Z" level=warning msg="cleaning up after shim disconnected" id=900ed7a56244a2bf4ce3099acf992af0c4a630be18cf29d2974711ffe137942d namespace=k8s.io Dec 13 04:03:48.680138 env[1143]: time="2024-12-13T04:03:48.680035970Z" level=info msg="cleaning up dead shim" Dec 13 04:03:48.696750 env[1143]: time="2024-12-13T04:03:48.696686376Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3184 runtime=io.containerd.runc.v2\n" Dec 13 04:03:48.868847 env[1143]: time="2024-12-13T04:03:48.868633549Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:03:48.900451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457018456.mount: Deactivated successfully. Dec 13 04:03:48.906175 env[1143]: time="2024-12-13T04:03:48.906097481Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693\"" Dec 13 04:03:48.907294 env[1143]: time="2024-12-13T04:03:48.907237479Z" level=info msg="StartContainer for \"7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693\"" Dec 13 04:03:48.941401 systemd[1]: Started cri-containerd-7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693.scope. Dec 13 04:03:48.986446 env[1143]: time="2024-12-13T04:03:48.986392734Z" level=info msg="StartContainer for \"7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693\" returns successfully" Dec 13 04:03:49.012131 systemd[1]: cri-containerd-7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693.scope: Deactivated successfully. 
Dec 13 04:03:49.044215 env[1143]: time="2024-12-13T04:03:49.044129149Z" level=info msg="shim disconnected" id=7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693 Dec 13 04:03:49.044215 env[1143]: time="2024-12-13T04:03:49.044210982Z" level=warning msg="cleaning up after shim disconnected" id=7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693 namespace=k8s.io Dec 13 04:03:49.044215 env[1143]: time="2024-12-13T04:03:49.044225339Z" level=info msg="cleaning up dead shim" Dec 13 04:03:49.054005 env[1143]: time="2024-12-13T04:03:49.053946427Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3245 runtime=io.containerd.runc.v2\n" Dec 13 04:03:49.090507 kubelet[1388]: E1213 04:03:49.090378 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:49.552538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f5276a4b65dfef9188150d5dead6316336e7d1194be8e8d92ec974e5fe57693-rootfs.mount: Deactivated successfully. Dec 13 04:03:49.617377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644550159.mount: Deactivated successfully. Dec 13 04:03:49.873811 env[1143]: time="2024-12-13T04:03:49.873274127Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:03:49.902244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578593616.mount: Deactivated successfully. Dec 13 04:03:49.905030 env[1143]: time="2024-12-13T04:03:49.904969621Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78866fab48be4484c36adbe3340a29714d0e059362260571423a635acac39adc\"" Dec 13 04:03:49.906019 env[1143]: time="2024-12-13T04:03:49.905995207Z" level=info msg="StartContainer for \"78866fab48be4484c36adbe3340a29714d0e059362260571423a635acac39adc\"" Dec 13 04:03:49.941335 systemd[1]: Started cri-containerd-78866fab48be4484c36adbe3340a29714d0e059362260571423a635acac39adc.scope. Dec 13 04:03:49.997919 env[1143]: time="2024-12-13T04:03:49.997826347Z" level=info msg="StartContainer for \"78866fab48be4484c36adbe3340a29714d0e059362260571423a635acac39adc\" returns successfully" Dec 13 04:03:50.006333 systemd[1]: cri-containerd-78866fab48be4484c36adbe3340a29714d0e059362260571423a635acac39adc.scope: Deactivated successfully. 
Dec 13 04:03:50.075763 env[1143]: time="2024-12-13T04:03:50.075684106Z" level=info msg="shim disconnected" id=78866fab48be4484c36adbe3340a29714d0e059362260571423a635acac39adc Dec 13 04:03:50.075763 env[1143]: time="2024-12-13T04:03:50.075746163Z" level=warning msg="cleaning up after shim disconnected" id=78866fab48be4484c36adbe3340a29714d0e059362260571423a635acac39adc namespace=k8s.io Dec 13 04:03:50.075763 env[1143]: time="2024-12-13T04:03:50.075758906Z" level=info msg="cleaning up dead shim" Dec 13 04:03:50.089454 env[1143]: time="2024-12-13T04:03:50.089398122Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3300 runtime=io.containerd.runc.v2\n" Dec 13 04:03:50.090873 kubelet[1388]: E1213 04:03:50.090801 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:50.211361 kubelet[1388]: E1213 04:03:50.209971 1388 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:03:50.884025 env[1143]: time="2024-12-13T04:03:50.883931471Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:03:50.941870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869341710.mount: Deactivated successfully. Dec 13 04:03:50.956868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364910482.mount: Deactivated successfully. Dec 13 04:03:50.977896 env[1143]: time="2024-12-13T04:03:50.977749280Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907\"" Dec 13 04:03:50.980770 env[1143]: time="2024-12-13T04:03:50.979373877Z" level=info msg="StartContainer for \"7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907\"" Dec 13 04:03:51.022917 systemd[1]: Started cri-containerd-7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907.scope. Dec 13 04:03:51.077001 systemd[1]: cri-containerd-7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907.scope: Deactivated successfully. 
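
From 04:03:48 to 04:03:51 the same sequence repeats for each of Cilium's short-lived init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state): CreateContainer, StartContainer, the container's scope deactivates when it exits, the runc v2 shim disconnects, and containerd cleans up the dead shim. A Python sketch that pairs those start/exit events per container ID, with patterns matched to this excerpt's formatting (an illustration, not a CRI API):

import re

# Start events as logged by containerd, exit events as logged by systemd.
START_RE = r'StartContainer for \W{0,2}([0-9a-f]{64})\W{0,2} returns successfully'
EXIT_RE = r'cri-containerd-([0-9a-f]{64})\.scope: Deactivated successfully'

def container_lifecycles(journal_text: str) -> dict:
    # Map container ID -> ordered list of "started"/"exited" events.
    events = {}
    for match in re.finditer("%s|%s" % (START_RE, EXIT_RE), journal_text):
        started, exited = match.groups()
        cid = started or exited
        events.setdefault(cid, []).append("started" if started else "exited")
    return events
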
Dec 13 04:03:51.079511 env[1143]: time="2024-12-13T04:03:51.079315462Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c359506_9590_4171_b949_3658d7fd2611.slice/cri-containerd-7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907.scope/memory.events\": no such file or directory" Dec 13 04:03:51.087048 env[1143]: time="2024-12-13T04:03:51.086953662Z" level=info msg="StartContainer for \"7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907\" returns successfully" Dec 13 04:03:51.091099 kubelet[1388]: E1213 04:03:51.091031 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:51.271362 env[1143]: time="2024-12-13T04:03:51.271207266Z" level=info msg="shim disconnected" id=7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907 Dec 13 04:03:51.271362 env[1143]: time="2024-12-13T04:03:51.271324555Z" level=warning msg="cleaning up after shim disconnected" id=7a813a7398a8676721312292e1ecb8228c2db721b8c8429aab5abc494cc06907 namespace=k8s.io Dec 13 04:03:51.271362 env[1143]: time="2024-12-13T04:03:51.271360743Z" level=info msg="cleaning up dead shim" Dec 13 04:03:51.299520 env[1143]: time="2024-12-13T04:03:51.299441585Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3359 runtime=io.containerd.runc.v2\n" Dec 13 04:03:51.886784 env[1143]: time="2024-12-13T04:03:51.886731529Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:03:51.928597 env[1143]: time="2024-12-13T04:03:51.928530204Z" level=info msg="CreateContainer within sandbox \"2c7fe9e88140cd1d54904c47c43d4eeb45353eb5abfda0653d944eda82d04682\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd\"" Dec 13 04:03:51.929615 env[1143]: time="2024-12-13T04:03:51.929569398Z" level=info msg="StartContainer for \"2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd\"" Dec 13 04:03:51.984919 systemd[1]: Started cri-containerd-2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd.scope. 
Dec 13 04:03:52.042371 env[1143]: time="2024-12-13T04:03:52.042304313Z" level=info msg="StartContainer for \"2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd\" returns successfully" Dec 13 04:03:52.091274 kubelet[1388]: E1213 04:03:52.091226 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:52.626467 env[1143]: time="2024-12-13T04:03:52.626389327Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:52.637717 env[1143]: time="2024-12-13T04:03:52.637618229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:52.645896 env[1143]: time="2024-12-13T04:03:52.645825584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:03:52.649187 env[1143]: time="2024-12-13T04:03:52.648028327Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 04:03:52.654353 env[1143]: time="2024-12-13T04:03:52.654247170Z" level=info msg="CreateContainer within sandbox \"01f78ff02c17dd08c24756e900c30db1c67453a3d9aa467df71f58474dc8473b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 04:03:52.699872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640275961.mount: Deactivated successfully. Dec 13 04:03:52.710119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875518904.mount: Deactivated successfully. Dec 13 04:03:52.730158 env[1143]: time="2024-12-13T04:03:52.729992704Z" level=info msg="CreateContainer within sandbox \"01f78ff02c17dd08c24756e900c30db1c67453a3d9aa467df71f58474dc8473b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d87dcadcada0fccf3bbe2fd1513ac68461f2528f0f5d79864c1176a8c510b89\"" Dec 13 04:03:52.731591 env[1143]: time="2024-12-13T04:03:52.731525043Z" level=info msg="StartContainer for \"8d87dcadcada0fccf3bbe2fd1513ac68461f2528f0f5d79864c1176a8c510b89\"" Dec 13 04:03:52.778258 systemd[1]: Started cri-containerd-8d87dcadcada0fccf3bbe2fd1513ac68461f2528f0f5d79864c1176a8c510b89.scope. 
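
The PullImage entry above resolves a tag-plus-digest reference for the cilium-operator image and returns a different digest, the identifier of the image as stored locally (the same sha256:ed355de9... name that appears in the ImageCreate events). A minimal split of such a reference, for illustration only (containerd has its own reference parser):

# Reference copied from the PullImage entry above.
ref = ("quay.io/cilium/operator-generic:v1.12.5"
       "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
name_and_tag, _, manifest_digest = ref.partition("@")
name, _, tag = name_and_tag.rpartition(":")
print(name)             # quay.io/cilium/operator-generic
print(tag)              # v1.12.5
print(manifest_digest)  # sha256:b296eb7f... (digest used for the pull)
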
Dec 13 04:03:52.864324 env[1143]: time="2024-12-13T04:03:52.864203572Z" level=info msg="StartContainer for \"8d87dcadcada0fccf3bbe2fd1513ac68461f2528f0f5d79864c1176a8c510b89\" returns successfully" Dec 13 04:03:53.061755 kubelet[1388]: I1213 04:03:53.061708 1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-4fmtm" podStartSLOduration=2.832623555 podStartE2EDuration="11.061662818s" podCreationTimestamp="2024-12-13 04:03:42 +0000 UTC" firstStartedPulling="2024-12-13 04:03:44.420611913 +0000 UTC m=+90.130281871" lastFinishedPulling="2024-12-13 04:03:52.649651136 +0000 UTC m=+98.359321134" observedRunningTime="2024-12-13 04:03:53.014110241 +0000 UTC m=+98.723780319" watchObservedRunningTime="2024-12-13 04:03:53.061662818 +0000 UTC m=+98.771332786" Dec 13 04:03:53.061996 kubelet[1388]: I1213 04:03:53.061939 1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kwfms" podStartSLOduration=7.061914269 podStartE2EDuration="7.061914269s" podCreationTimestamp="2024-12-13 04:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:03:53.061286072 +0000 UTC m=+98.770956050" watchObservedRunningTime="2024-12-13 04:03:53.061914269 +0000 UTC m=+98.771584227" Dec 13 04:03:53.091659 kubelet[1388]: E1213 04:03:53.091554 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:53.536101 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 04:03:53.611094 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Dec 13 04:03:54.092169 kubelet[1388]: E1213 04:03:54.092089 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:55.001090 kubelet[1388]: E1213 04:03:55.000969 1388 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:55.094229 kubelet[1388]: E1213 04:03:55.094157 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:55.696840 systemd[1]: run-containerd-runc-k8s.io-2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd-runc.k1Xk1J.mount: Deactivated successfully. 
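
The pod_startup_latency_tracker entry for cilium-operator-5cc964979-4fmtm reports podStartSLOduration=2.832623555 against podStartE2EDuration=11.061662818s; the difference is exactly the image-pull window recorded in the same entry, consistent with the startup SLI excluding image pulls. The arithmetic can be reproduced from the monotonic (m=+...) offsets:

# Monotonic offsets (m=+..., seconds) copied from the log entry above.
first_started_pulling = 90.130281871
last_finished_pulling = 98.359321134
e2e_duration = 11.061662818          # podStartE2EDuration

pull_window = last_finished_pulling - first_started_pulling   # ~8.229 s
slo_duration = e2e_duration - pull_window
print(f"{slo_duration:.9f}")         # 2.832623555, matching podStartSLOduration

For cilium-kwfms no pull was recorded (its pull timestamps are the zero value), so its SLO duration equals the 7.061914269 s end-to-end time.
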
Dec 13 04:03:56.095286 kubelet[1388]: E1213 04:03:56.095186 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:57.095849 kubelet[1388]: E1213 04:03:57.095781 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:57.286465 systemd-networkd[974]: lxc_health: Link UP Dec 13 04:03:57.293506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 04:03:57.292948 systemd-networkd[974]: lxc_health: Gained carrier Dec 13 04:03:58.096767 kubelet[1388]: E1213 04:03:58.096711 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:03:58.649291 systemd-networkd[974]: lxc_health: Gained IPv6LL Dec 13 04:03:59.097216 kubelet[1388]: E1213 04:03:59.097138 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:00.097400 kubelet[1388]: E1213 04:04:00.097337 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:00.196690 systemd[1]: run-containerd-runc-k8s.io-2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd-runc.Kk3wux.mount: Deactivated successfully. Dec 13 04:04:01.099018 kubelet[1388]: E1213 04:04:01.098295 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:02.099116 kubelet[1388]: E1213 04:04:02.099030 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:02.421812 systemd[1]: run-containerd-runc-k8s.io-2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd-runc.rIEcop.mount: Deactivated successfully. Dec 13 04:04:03.100315 kubelet[1388]: E1213 04:04:03.100179 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:04.101312 kubelet[1388]: E1213 04:04:04.101278 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:04.800200 systemd[1]: run-containerd-runc-k8s.io-2055bbb53f5ea26364d9f286d4bb3ea33cd87e5d689059db2ce38c5af30384dd-runc.wTZ0HT.mount: Deactivated successfully. 
Dec 13 04:04:05.102609 kubelet[1388]: E1213 04:04:05.102457 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:06.103946 kubelet[1388]: E1213 04:04:06.103779 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:07.104107 kubelet[1388]: E1213 04:04:07.104003 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:08.105895 kubelet[1388]: E1213 04:04:08.105835 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:09.107407 kubelet[1388]: E1213 04:04:09.107311 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:10.107634 kubelet[1388]: E1213 04:04:10.107566 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 04:04:11.109043 kubelet[1388]: E1213 04:04:11.108949 1388 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"