Feb 8 23:30:22.051659 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 8 23:30:22.051709 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:30:22.051737 kernel: BIOS-provided physical RAM map: Feb 8 23:30:22.051755 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 8 23:30:22.051772 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 8 23:30:22.051788 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 8 23:30:22.070512 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Feb 8 23:30:22.070537 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Feb 8 23:30:22.070563 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 8 23:30:22.070581 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 8 23:30:22.070598 kernel: NX (Execute Disable) protection: active Feb 8 23:30:22.070615 kernel: SMBIOS 2.8 present. Feb 8 23:30:22.070632 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Feb 8 23:30:22.070650 kernel: Hypervisor detected: KVM Feb 8 23:30:22.070670 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 8 23:30:22.070693 kernel: kvm-clock: cpu 0, msr 31faa001, primary cpu clock Feb 8 23:30:22.070710 kernel: kvm-clock: using sched offset of 5742567242 cycles Feb 8 23:30:22.070730 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 8 23:30:22.070749 kernel: tsc: Detected 1996.249 MHz processor Feb 8 23:30:22.070768 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 8 23:30:22.070788 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 8 23:30:22.070807 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Feb 8 23:30:22.070826 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 8 23:30:22.070849 kernel: ACPI: Early table checksum verification disabled Feb 8 23:30:22.070868 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Feb 8 23:30:22.070887 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:30:22.070906 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:30:22.070924 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:30:22.070943 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 8 23:30:22.070961 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:30:22.070979 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:30:22.070998 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Feb 8 23:30:22.071020 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Feb 8 23:30:22.071039 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 8 23:30:22.071057 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Feb 8 23:30:22.071075 
kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Feb 8 23:30:22.071093 kernel: No NUMA configuration found Feb 8 23:30:22.071112 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Feb 8 23:30:22.071130 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Feb 8 23:30:22.071149 kernel: Zone ranges: Feb 8 23:30:22.071192 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 8 23:30:22.071212 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Feb 8 23:30:22.071231 kernel: Normal empty Feb 8 23:30:22.071251 kernel: Movable zone start for each node Feb 8 23:30:22.071270 kernel: Early memory node ranges Feb 8 23:30:22.071289 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 8 23:30:22.071312 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Feb 8 23:30:22.071332 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Feb 8 23:30:22.071351 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 8 23:30:22.071370 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 8 23:30:22.071420 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Feb 8 23:30:22.071450 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 8 23:30:22.071474 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 8 23:30:22.071493 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 8 23:30:22.071513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 8 23:30:22.071538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 8 23:30:22.071558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 8 23:30:22.071577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 8 23:30:22.071596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 8 23:30:22.071615 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 8 23:30:22.071634 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 8 23:30:22.071653 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 8 23:30:22.071672 kernel: Booting paravirtualized kernel on KVM Feb 8 23:30:22.071692 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 8 23:30:22.071711 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 8 23:30:22.071735 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 8 23:30:22.071754 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 8 23:30:22.071773 kernel: pcpu-alloc: [0] 0 1 Feb 8 23:30:22.071791 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 8 23:30:22.071811 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 8 23:30:22.071830 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Feb 8 23:30:22.071849 kernel: Policy zone: DMA32 Feb 8 23:30:22.071871 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:30:22.071896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 8 23:30:22.071937 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 8 23:30:22.071957 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 8 23:30:22.071976 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 8 23:30:22.071997 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 8 23:30:22.072016 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 8 23:30:22.072035 kernel: ftrace: allocating 34475 entries in 135 pages Feb 8 23:30:22.072055 kernel: ftrace: allocated 135 pages with 4 groups Feb 8 23:30:22.072078 kernel: rcu: Hierarchical RCU implementation. Feb 8 23:30:22.072099 kernel: rcu: RCU event tracing is enabled. Feb 8 23:30:22.072119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 8 23:30:22.072139 kernel: Rude variant of Tasks RCU enabled. Feb 8 23:30:22.072158 kernel: Tracing variant of Tasks RCU enabled. Feb 8 23:30:22.072178 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 8 23:30:22.072197 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 8 23:30:22.072216 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 8 23:30:22.072235 kernel: Console: colour VGA+ 80x25 Feb 8 23:30:22.072257 kernel: printk: console [tty0] enabled Feb 8 23:30:22.072277 kernel: printk: console [ttyS0] enabled Feb 8 23:30:22.072296 kernel: ACPI: Core revision 20210730 Feb 8 23:30:22.072315 kernel: APIC: Switch to symmetric I/O mode setup Feb 8 23:30:22.072335 kernel: x2apic enabled Feb 8 23:30:22.072354 kernel: Switched APIC routing to physical x2apic. Feb 8 23:30:22.072373 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 8 23:30:22.072430 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 8 23:30:22.072450 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Feb 8 23:30:22.072468 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 8 23:30:22.072487 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 8 23:30:22.072501 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 8 23:30:22.072516 kernel: Spectre V2 : Mitigation: Retpolines Feb 8 23:30:22.072531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 8 23:30:22.072545 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 8 23:30:22.072560 kernel: Speculative Store Bypass: Vulnerable Feb 8 23:30:22.072574 kernel: x86/fpu: x87 FPU will use FXSAVE Feb 8 23:30:22.072588 kernel: Freeing SMP alternatives memory: 32K Feb 8 23:30:22.072602 kernel: pid_max: default: 32768 minimum: 301 Feb 8 23:30:22.072621 kernel: LSM: Security Framework initializing Feb 8 23:30:22.072635 kernel: SELinux: Initializing. Feb 8 23:30:22.072649 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 8 23:30:22.072664 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 8 23:30:22.072679 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Feb 8 23:30:22.072694 kernel: Performance Events: AMD PMU driver. Feb 8 23:30:22.072708 kernel: ... version: 0 Feb 8 23:30:22.072751 kernel: ... bit width: 48 Feb 8 23:30:22.072769 kernel: ... generic registers: 4 Feb 8 23:30:22.072798 kernel: ... 
value mask: 0000ffffffffffff Feb 8 23:30:22.072813 kernel: ... max period: 00007fffffffffff Feb 8 23:30:22.072831 kernel: ... fixed-purpose events: 0 Feb 8 23:30:22.072845 kernel: ... event mask: 000000000000000f Feb 8 23:30:22.072860 kernel: signal: max sigframe size: 1440 Feb 8 23:30:22.072875 kernel: rcu: Hierarchical SRCU implementation. Feb 8 23:30:22.072890 kernel: smp: Bringing up secondary CPUs ... Feb 8 23:30:22.072934 kernel: x86: Booting SMP configuration: Feb 8 23:30:22.072954 kernel: .... node #0, CPUs: #1 Feb 8 23:30:22.072969 kernel: kvm-clock: cpu 1, msr 31faa041, secondary cpu clock Feb 8 23:30:22.073010 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 8 23:30:22.073028 kernel: smp: Brought up 1 node, 2 CPUs Feb 8 23:30:22.073043 kernel: smpboot: Max logical packages: 2 Feb 8 23:30:22.073083 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Feb 8 23:30:22.073101 kernel: devtmpfs: initialized Feb 8 23:30:22.073116 kernel: x86/mm: Memory block size: 128MB Feb 8 23:30:22.073158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 8 23:30:22.073178 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 8 23:30:22.073193 kernel: pinctrl core: initialized pinctrl subsystem Feb 8 23:30:22.073237 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 8 23:30:22.073253 kernel: audit: initializing netlink subsys (disabled) Feb 8 23:30:22.073295 kernel: audit: type=2000 audit(1707435020.759:1): state=initialized audit_enabled=0 res=1 Feb 8 23:30:22.073311 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 8 23:30:22.073353 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 8 23:30:22.073370 kernel: cpuidle: using governor menu Feb 8 23:30:22.073433 kernel: ACPI: bus type PCI registered Feb 8 23:30:22.073455 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 8 23:30:22.073470 kernel: dca service started, version 1.12.1 Feb 8 23:30:22.073485 kernel: PCI: Using configuration type 1 for base access Feb 8 23:30:22.073501 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 8 23:30:22.073516 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 8 23:30:22.073531 kernel: ACPI: Added _OSI(Module Device) Feb 8 23:30:22.073546 kernel: ACPI: Added _OSI(Processor Device) Feb 8 23:30:22.073561 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 8 23:30:22.073576 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 8 23:30:22.073594 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 8 23:30:22.073609 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 8 23:30:22.073624 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 8 23:30:22.073639 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 8 23:30:22.073654 kernel: ACPI: Interpreter enabled Feb 8 23:30:22.073669 kernel: ACPI: PM: (supports S0 S3 S5) Feb 8 23:30:22.073684 kernel: ACPI: Using IOAPIC for interrupt routing Feb 8 23:30:22.073699 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 8 23:30:22.073714 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 8 23:30:22.073732 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 8 23:30:22.073991 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 8 23:30:22.074155 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 8 23:30:22.074179 kernel: acpiphp: Slot [3] registered Feb 8 23:30:22.074195 kernel: acpiphp: Slot [4] registered Feb 8 23:30:22.074210 kernel: acpiphp: Slot [5] registered Feb 8 23:30:22.074225 kernel: acpiphp: Slot [6] registered Feb 8 23:30:22.074246 kernel: acpiphp: Slot [7] registered Feb 8 23:30:22.074261 kernel: acpiphp: Slot [8] registered Feb 8 23:30:22.074275 kernel: acpiphp: Slot [9] registered Feb 8 23:30:22.074290 kernel: acpiphp: Slot [10] registered Feb 8 23:30:22.074305 kernel: acpiphp: Slot [11] registered Feb 8 23:30:22.074320 kernel: acpiphp: Slot [12] registered Feb 8 23:30:22.074335 kernel: acpiphp: Slot [13] registered Feb 8 23:30:22.074350 kernel: acpiphp: Slot [14] registered Feb 8 23:30:22.074365 kernel: acpiphp: Slot [15] registered Feb 8 23:30:22.074414 kernel: acpiphp: Slot [16] registered Feb 8 23:30:22.074434 kernel: acpiphp: Slot [17] registered Feb 8 23:30:22.074449 kernel: acpiphp: Slot [18] registered Feb 8 23:30:22.074464 kernel: acpiphp: Slot [19] registered Feb 8 23:30:22.074478 kernel: acpiphp: Slot [20] registered Feb 8 23:30:22.074493 kernel: acpiphp: Slot [21] registered Feb 8 23:30:22.074508 kernel: acpiphp: Slot [22] registered Feb 8 23:30:22.074551 kernel: acpiphp: Slot [23] registered Feb 8 23:30:22.074569 kernel: acpiphp: Slot [24] registered Feb 8 23:30:22.074583 kernel: acpiphp: Slot [25] registered Feb 8 23:30:22.074603 kernel: acpiphp: Slot [26] registered Feb 8 23:30:22.074618 kernel: acpiphp: Slot [27] registered Feb 8 23:30:22.074633 kernel: acpiphp: Slot [28] registered Feb 8 23:30:22.074648 kernel: acpiphp: Slot [29] registered Feb 8 23:30:22.074692 kernel: acpiphp: Slot [30] registered Feb 8 23:30:22.074708 kernel: acpiphp: Slot [31] registered Feb 8 23:30:22.074723 kernel: PCI host bridge to bus 0000:00 Feb 8 23:30:22.074970 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 8 23:30:22.075231 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 8 23:30:22.080989 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 8 23:30:22.081073 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 8 
23:30:22.081154 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 8 23:30:22.081228 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 8 23:30:22.081348 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 8 23:30:22.081472 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 8 23:30:22.081582 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 8 23:30:22.081679 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Feb 8 23:30:22.081773 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 8 23:30:22.081861 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 8 23:30:22.081948 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 8 23:30:22.082035 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 8 23:30:22.082130 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 8 23:30:22.082223 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 8 23:30:22.082311 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 8 23:30:22.082437 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 8 23:30:22.082530 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 8 23:30:22.082619 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 8 23:30:22.082707 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Feb 8 23:30:22.082809 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Feb 8 23:30:22.082900 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 8 23:30:22.082997 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 8 23:30:22.083085 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Feb 8 23:30:22.083173 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Feb 8 23:30:22.083259 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 8 23:30:22.083346 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Feb 8 23:30:22.083480 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 8 23:30:22.083566 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 8 23:30:22.083647 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Feb 8 23:30:22.083730 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 8 23:30:22.083826 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Feb 8 23:30:22.083924 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Feb 8 23:30:22.084011 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 8 23:30:22.084127 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Feb 8 23:30:22.084212 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Feb 8 23:30:22.084299 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 8 23:30:22.084312 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 8 23:30:22.084322 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 8 23:30:22.084331 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 8 23:30:22.084340 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 8 23:30:22.084348 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 8 23:30:22.084360 kernel: iommu: Default domain type: Translated Feb 8 23:30:22.084369 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 8 23:30:22.088561 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 8 23:30:22.088650 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 8 23:30:22.088731 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 8 23:30:22.088743 kernel: vgaarb: loaded Feb 8 23:30:22.088752 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 8 23:30:22.088760 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 8 23:30:22.088768 kernel: PTP clock support registered Feb 8 23:30:22.088789 kernel: PCI: Using ACPI for IRQ routing Feb 8 23:30:22.088798 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 8 23:30:22.088806 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 8 23:30:22.088814 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Feb 8 23:30:22.088822 kernel: clocksource: Switched to clocksource kvm-clock Feb 8 23:30:22.088831 kernel: VFS: Disk quotas dquot_6.6.0 Feb 8 23:30:22.088839 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 8 23:30:22.088847 kernel: pnp: PnP ACPI init Feb 8 23:30:22.088946 kernel: pnp 00:03: [dma 2] Feb 8 23:30:22.088963 kernel: pnp: PnP ACPI: found 5 devices Feb 8 23:30:22.088971 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 8 23:30:22.088980 kernel: NET: Registered PF_INET protocol family Feb 8 23:30:22.088988 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 8 23:30:22.088997 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 8 23:30:22.089005 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 8 23:30:22.089014 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:30:22.089022 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 8 23:30:22.089032 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 8 23:30:22.089041 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 8 23:30:22.089049 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 8 23:30:22.089057 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 8 23:30:22.089065 kernel: NET: Registered PF_XDP protocol family Feb 8 23:30:22.089159 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 8 23:30:22.089237 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 8 23:30:22.089309 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 8 23:30:22.089437 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 8 23:30:22.089519 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 8 23:30:22.089601 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 8 23:30:22.089682 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 8 23:30:22.089762 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 8 23:30:22.089774 kernel: PCI: CLS 0 bytes, default 64 Feb 8 23:30:22.089782 kernel: Initialise system trusted keyrings Feb 8 23:30:22.089791 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 8 23:30:22.089802 kernel: Key type asymmetric registered Feb 8 23:30:22.089810 kernel: Asymmetric key parser 'x509' registered Feb 8 23:30:22.089818 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 8 23:30:22.089827 kernel: io scheduler mq-deadline 
registered Feb 8 23:30:22.089835 kernel: io scheduler kyber registered Feb 8 23:30:22.089843 kernel: io scheduler bfq registered Feb 8 23:30:22.089851 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 8 23:30:22.089860 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 8 23:30:22.089868 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 8 23:30:22.089876 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 8 23:30:22.089887 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 8 23:30:22.089895 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 8 23:30:22.089903 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 8 23:30:22.089911 kernel: random: crng init done Feb 8 23:30:22.089920 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 8 23:30:22.089928 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 8 23:30:22.089936 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 8 23:30:22.090029 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 8 23:30:22.090046 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 8 23:30:22.090119 kernel: rtc_cmos 00:04: registered as rtc0 Feb 8 23:30:22.090193 kernel: rtc_cmos 00:04: setting system clock to 2024-02-08T23:30:21 UTC (1707435021) Feb 8 23:30:22.090266 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 8 23:30:22.090278 kernel: NET: Registered PF_INET6 protocol family Feb 8 23:30:22.090286 kernel: Segment Routing with IPv6 Feb 8 23:30:22.090294 kernel: In-situ OAM (IOAM) with IPv6 Feb 8 23:30:22.090302 kernel: NET: Registered PF_PACKET protocol family Feb 8 23:30:22.090311 kernel: Key type dns_resolver registered Feb 8 23:30:22.090322 kernel: IPI shorthand broadcast: enabled Feb 8 23:30:22.090330 kernel: sched_clock: Marking stable (731771765, 125762605)->(924031741, -66497371) Feb 8 23:30:22.090338 kernel: registered taskstats version 1 Feb 8 23:30:22.090346 kernel: Loading compiled-in X.509 certificates Feb 8 23:30:22.090355 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 8 23:30:22.090363 kernel: Key type .fscrypt registered Feb 8 23:30:22.090372 kernel: Key type fscrypt-provisioning registered Feb 8 23:30:22.090394 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 8 23:30:22.090406 kernel: ima: Allocated hash algorithm: sha1 Feb 8 23:30:22.090414 kernel: ima: No architecture policies found Feb 8 23:30:22.090423 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 8 23:30:22.090431 kernel: Write protecting the kernel read-only data: 28672k Feb 8 23:30:22.090439 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 8 23:30:22.090448 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 8 23:30:22.090456 kernel: Run /init as init process Feb 8 23:30:22.090464 kernel: with arguments: Feb 8 23:30:22.090472 kernel: /init Feb 8 23:30:22.090482 kernel: with environment: Feb 8 23:30:22.090491 kernel: HOME=/ Feb 8 23:30:22.090498 kernel: TERM=linux Feb 8 23:30:22.090506 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 8 23:30:22.090517 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:30:22.090529 systemd[1]: Detected virtualization kvm. Feb 8 23:30:22.090538 systemd[1]: Detected architecture x86-64. Feb 8 23:30:22.090547 systemd[1]: Running in initrd. Feb 8 23:30:22.090558 systemd[1]: No hostname configured, using default hostname. Feb 8 23:30:22.090566 systemd[1]: Hostname set to . Feb 8 23:30:22.090576 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:30:22.090585 systemd[1]: Queued start job for default target initrd.target. Feb 8 23:30:22.090593 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:30:22.090602 systemd[1]: Reached target cryptsetup.target. Feb 8 23:30:22.090611 systemd[1]: Reached target paths.target. Feb 8 23:30:22.090619 systemd[1]: Reached target slices.target. Feb 8 23:30:22.090630 systemd[1]: Reached target swap.target. Feb 8 23:30:22.090639 systemd[1]: Reached target timers.target. Feb 8 23:30:22.090649 systemd[1]: Listening on iscsid.socket. Feb 8 23:30:22.090657 systemd[1]: Listening on iscsiuio.socket. Feb 8 23:30:22.090666 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:30:22.090675 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:30:22.090683 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:30:22.090692 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:30:22.090703 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:30:22.090712 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:30:22.090721 systemd[1]: Reached target sockets.target. Feb 8 23:30:22.090730 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:30:22.090747 systemd[1]: Finished network-cleanup.service. Feb 8 23:30:22.090759 systemd[1]: Starting systemd-fsck-usr.service... Feb 8 23:30:22.090769 systemd[1]: Starting systemd-journald.service... Feb 8 23:30:22.090779 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:30:22.090787 systemd[1]: Starting systemd-resolved.service... Feb 8 23:30:22.090796 systemd[1]: Starting systemd-vconsole-setup.service... Feb 8 23:30:22.090805 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:30:22.090814 systemd[1]: Finished systemd-fsck-usr.service. Feb 8 23:30:22.090823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 8 23:30:22.090833 kernel: audit: type=1130 audit(1707435022.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.090846 systemd-journald[185]: Journal started Feb 8 23:30:22.090896 systemd-journald[185]: Runtime Journal (/run/log/journal/c6890bc9b82c4b56af34ff0db62524ca) is 4.9M, max 39.5M, 34.5M free. Feb 8 23:30:22.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.056354 systemd-modules-load[186]: Inserted module 'overlay' Feb 8 23:30:22.125075 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 8 23:30:22.125098 kernel: Bridge firewalling registered Feb 8 23:30:22.096976 systemd-resolved[187]: Positive Trust Anchors: Feb 8 23:30:22.126949 systemd[1]: Started systemd-journald.service. Feb 8 23:30:22.096989 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:30:22.132676 kernel: audit: type=1130 audit(1707435022.127:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.132710 kernel: SCSI subsystem initialized Feb 8 23:30:22.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.097026 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:30:22.140116 kernel: audit: type=1130 audit(1707435022.132:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.099740 systemd-resolved[187]: Defaulting to hostname 'linux'. Feb 8 23:30:22.154966 kernel: audit: type=1130 audit(1707435022.140:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.154991 kernel: audit: type=1130 audit(1707435022.144:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.155003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 8 23:30:22.155015 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:30:22.155025 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:30:22.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.107039 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 8 23:30:22.127563 systemd[1]: Started systemd-resolved.service. Feb 8 23:30:22.133300 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:30:22.140782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:30:22.145272 systemd[1]: Reached target nss-lookup.target. Feb 8 23:30:22.150047 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:30:22.158526 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 8 23:30:22.159309 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:30:22.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.164893 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:30:22.165422 kernel: audit: type=1130 audit(1707435022.160:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.172941 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:30:22.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.177393 kernel: audit: type=1130 audit(1707435022.173:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.180921 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:30:22.182241 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:30:22.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.187396 kernel: audit: type=1130 audit(1707435022.180:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.197934 dracut-cmdline[208]: dracut-dracut-053 Feb 8 23:30:22.201315 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:30:22.270433 kernel: Loading iSCSI transport class v2.0-870. 
Feb 8 23:30:22.284427 kernel: iscsi: registered transport (tcp) Feb 8 23:30:22.309443 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:30:22.309527 kernel: QLogic iSCSI HBA Driver Feb 8 23:30:22.363330 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:30:22.368723 kernel: audit: type=1130 audit(1707435022.363:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.364836 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:30:22.451540 kernel: raid6: sse2x4 gen() 11966 MB/s Feb 8 23:30:22.468480 kernel: raid6: sse2x4 xor() 5020 MB/s Feb 8 23:30:22.485475 kernel: raid6: sse2x2 gen() 14055 MB/s Feb 8 23:30:22.502478 kernel: raid6: sse2x2 xor() 8718 MB/s Feb 8 23:30:22.519603 kernel: raid6: sse2x1 gen() 11033 MB/s Feb 8 23:30:22.537160 kernel: raid6: sse2x1 xor() 6884 MB/s Feb 8 23:30:22.537231 kernel: raid6: using algorithm sse2x2 gen() 14055 MB/s Feb 8 23:30:22.537262 kernel: raid6: .... xor() 8718 MB/s, rmw enabled Feb 8 23:30:22.537970 kernel: raid6: using ssse3x2 recovery algorithm Feb 8 23:30:22.553475 kernel: xor: measuring software checksum speed Feb 8 23:30:22.553538 kernel: prefetch64-sse : 17233 MB/sec Feb 8 23:30:22.556779 kernel: generic_sse : 15697 MB/sec Feb 8 23:30:22.556838 kernel: xor: using function: prefetch64-sse (17233 MB/sec) Feb 8 23:30:22.673454 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:30:22.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.690535 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:30:22.692000 audit: BPF prog-id=7 op=LOAD Feb 8 23:30:22.693000 audit: BPF prog-id=8 op=LOAD Feb 8 23:30:22.694847 systemd[1]: Starting systemd-udevd.service... Feb 8 23:30:22.710197 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 8 23:30:22.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.714976 systemd[1]: Started systemd-udevd.service. Feb 8 23:30:22.718945 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:30:22.744778 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Feb 8 23:30:22.792150 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:30:22.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.795111 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:30:22.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:22.844205 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:30:22.913409 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 8 23:30:22.927507 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 8 23:30:22.927571 kernel: GPT:17805311 != 41943039 Feb 8 23:30:22.927583 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 8 23:30:22.927595 kernel: GPT:17805311 != 41943039 Feb 8 23:30:22.927606 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 8 23:30:22.927617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:30:22.952418 kernel: libata version 3.00 loaded. Feb 8 23:30:22.960413 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 8 23:30:22.960653 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (439) Feb 8 23:30:22.967224 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:30:23.008889 kernel: scsi host0: ata_piix Feb 8 23:30:23.009074 kernel: scsi host1: ata_piix Feb 8 23:30:23.009183 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 8 23:30:23.009196 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 8 23:30:23.015078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:30:23.015875 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:30:23.020057 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:30:23.024281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:30:23.026948 systemd[1]: Starting disk-uuid.service... Feb 8 23:30:23.039217 disk-uuid[461]: Primary Header is updated. Feb 8 23:30:23.039217 disk-uuid[461]: Secondary Entries is updated. Feb 8 23:30:23.039217 disk-uuid[461]: Secondary Header is updated. Feb 8 23:30:23.048621 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:30:23.054063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:30:24.071455 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:30:24.072177 disk-uuid[462]: The operation has completed successfully. Feb 8 23:30:24.142801 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:30:24.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.143021 systemd[1]: Finished disk-uuid.service. Feb 8 23:30:24.164646 systemd[1]: Starting verity-setup.service... Feb 8 23:30:24.185250 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 8 23:30:24.277904 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:30:24.279235 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:30:24.291331 systemd[1]: Finished verity-setup.service. Feb 8 23:30:24.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.424486 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:30:24.425991 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:30:24.427084 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:30:24.428762 systemd[1]: Starting ignition-setup.service... Feb 8 23:30:24.430565 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 8 23:30:24.479688 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:30:24.479773 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:30:24.479788 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:30:24.542605 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:30:24.571761 systemd[1]: Finished ignition-setup.service. Feb 8 23:30:24.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.574606 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:30:24.591753 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:30:24.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.594000 audit: BPF prog-id=9 op=LOAD Feb 8 23:30:24.596654 systemd[1]: Starting systemd-networkd.service... Feb 8 23:30:24.636532 systemd-networkd[633]: lo: Link UP Feb 8 23:30:24.636550 systemd-networkd[633]: lo: Gained carrier Feb 8 23:30:24.637902 systemd-networkd[633]: Enumeration completed Feb 8 23:30:24.638349 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:30:24.639579 systemd[1]: Started systemd-networkd.service. Feb 8 23:30:24.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.640992 systemd[1]: Reached target network.target. Feb 8 23:30:24.641276 systemd-networkd[633]: eth0: Link UP Feb 8 23:30:24.641281 systemd-networkd[633]: eth0: Gained carrier Feb 8 23:30:24.643849 systemd[1]: Starting iscsiuio.service... Feb 8 23:30:24.650950 systemd[1]: Started iscsiuio.service. Feb 8 23:30:24.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.657328 systemd[1]: Starting iscsid.service... Feb 8 23:30:24.664785 iscsid[638]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:30:24.664785 iscsid[638]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:30:24.664785 iscsid[638]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:30:24.664785 iscsid[638]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:30:24.664785 iscsid[638]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:30:24.664785 iscsid[638]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:30:24.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.666045 systemd[1]: Started iscsid.service.
Feb 8 23:30:24.669562 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.234/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:30:24.672254 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:30:24.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.686825 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:30:24.687448 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:30:24.687993 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:30:24.688588 systemd[1]: Reached target remote-fs.target. Feb 8 23:30:24.689868 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:30:24.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.701638 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:30:24.862768 ignition[629]: Ignition 2.14.0 Feb 8 23:30:24.864717 ignition[629]: Stage: fetch-offline Feb 8 23:30:24.866025 ignition[629]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:30:24.867865 ignition[629]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:30:24.870719 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:30:24.870949 ignition[629]: parsed url from cmdline: "" Feb 8 23:30:24.870959 ignition[629]: no config URL provided Feb 8 23:30:24.870973 ignition[629]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:30:24.873801 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:30:24.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:24.870991 ignition[629]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:30:24.877317 systemd[1]: Starting ignition-fetch.service... Feb 8 23:30:24.871003 ignition[629]: failed to fetch config: resource requires networking Feb 8 23:30:24.871523 ignition[629]: Ignition finished successfully Feb 8 23:30:24.899900 ignition[656]: Ignition 2.14.0 Feb 8 23:30:24.899956 ignition[656]: Stage: fetch Feb 8 23:30:24.900244 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:30:24.900291 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:30:24.903001 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:30:24.903310 ignition[656]: parsed url from cmdline: "" Feb 8 23:30:24.903321 ignition[656]: no config URL provided Feb 8 23:30:24.903337 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:30:24.903359 ignition[656]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:30:24.908564 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 8 23:30:24.908639 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 8 23:30:24.909076 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 8 23:30:25.108553 ignition[656]: GET result: OK Feb 8 23:30:25.108735 ignition[656]: parsing config with SHA512: 14da0de91f92050849821f00edb0a17a297b828bdad6cfd82db7d4437dac2fe47122de161e0da44b7edf62a51f1619578425ba60ce97055a5a7c07cbee386889 Feb 8 23:30:25.169610 unknown[656]: fetched base config from "system" Feb 8 23:30:25.169657 unknown[656]: fetched base config from "system" Feb 8 23:30:25.170798 ignition[656]: fetch: fetch complete Feb 8 23:30:25.169673 unknown[656]: fetched user config from "openstack" Feb 8 23:30:25.170812 ignition[656]: fetch: fetch passed Feb 8 23:30:25.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.174521 systemd[1]: Finished ignition-fetch.service. Feb 8 23:30:25.170897 ignition[656]: Ignition finished successfully Feb 8 23:30:25.178918 systemd[1]: Starting ignition-kargs.service... Feb 8 23:30:25.202043 ignition[662]: Ignition 2.14.0 Feb 8 23:30:25.202072 ignition[662]: Stage: kargs Feb 8 23:30:25.202418 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:30:25.202469 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:30:25.204815 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:30:25.207588 ignition[662]: kargs: kargs passed Feb 8 23:30:25.217824 systemd[1]: Finished ignition-kargs.service. Feb 8 23:30:25.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.207686 ignition[662]: Ignition finished successfully Feb 8 23:30:25.221781 systemd[1]: Starting ignition-disks.service... Feb 8 23:30:25.235724 ignition[667]: Ignition 2.14.0 Feb 8 23:30:25.235738 ignition[667]: Stage: disks Feb 8 23:30:25.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.239554 systemd[1]: Finished ignition-disks.service. Feb 8 23:30:25.235873 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:30:25.241554 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:30:25.235896 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:30:25.242752 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:30:25.236814 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:30:25.243854 systemd[1]: Reached target local-fs.target. Feb 8 23:30:25.237864 ignition[667]: disks: disks passed Feb 8 23:30:25.245515 systemd[1]: Reached target sysinit.target. Feb 8 23:30:25.237915 ignition[667]: Ignition finished successfully Feb 8 23:30:25.247158 systemd[1]: Reached target basic.target. Feb 8 23:30:25.250749 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:30:25.283318 systemd-fsck[675]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 8 23:30:25.293734 systemd[1]: Finished systemd-fsck-root.service. 
Feb 8 23:30:25.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.296095 systemd[1]: Mounting sysroot.mount... Feb 8 23:30:25.323447 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:30:25.323898 systemd[1]: Mounted sysroot.mount. Feb 8 23:30:25.325844 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:30:25.330086 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:30:25.331820 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 8 23:30:25.333330 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 8 23:30:25.335131 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:30:25.335220 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:30:25.341297 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:30:25.346463 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:30:25.348407 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:30:25.356300 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:30:25.362863 initrd-setup-root[695]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:30:25.367421 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Feb 8 23:30:25.373036 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:30:25.373073 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:30:25.373086 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:30:25.379262 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:30:25.387516 initrd-setup-root[727]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:30:25.394588 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:30:25.749080 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:30:25.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.752317 systemd[1]: Starting ignition-mount.service... Feb 8 23:30:25.757237 systemd[1]: Starting sysroot-boot.service... Feb 8 23:30:25.774770 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:30:25.774996 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:30:25.833871 systemd[1]: Finished sysroot-boot.service. Feb 8 23:30:25.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:30:25.839366 ignition[750]: INFO : Ignition 2.14.0 Feb 8 23:30:25.840793 ignition[750]: INFO : Stage: mount Feb 8 23:30:25.841711 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:30:25.842477 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:30:25.844794 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:30:25.846736 ignition[750]: INFO : mount: mount passed Feb 8 23:30:25.847491 ignition[750]: INFO : Ignition finished successfully Feb 8 23:30:25.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.849105 systemd[1]: Finished ignition-mount.service. Feb 8 23:30:25.869190 coreos-metadata[681]: Feb 08 23:30:25.869 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 8 23:30:25.885694 coreos-metadata[681]: Feb 08 23:30:25.885 INFO Fetch successful Feb 8 23:30:25.886677 coreos-metadata[681]: Feb 08 23:30:25.886 INFO wrote hostname ci-3510-3-2-9-158debf268.novalocal to /sysroot/etc/hostname Feb 8 23:30:25.890431 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 8 23:30:25.890628 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 8 23:30:25.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:25.894190 systemd[1]: Starting ignition-files.service... Feb 8 23:30:25.905965 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:30:25.915448 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Feb 8 23:30:25.918429 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:30:25.918487 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:30:25.920640 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:30:25.929916 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 8 23:30:25.948341 ignition[777]: INFO : Ignition 2.14.0 Feb 8 23:30:25.949726 ignition[777]: INFO : Stage: files Feb 8 23:30:25.950906 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:30:25.952326 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:30:25.956159 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:30:25.961101 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:30:25.963716 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:30:25.965152 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:30:25.972066 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:30:25.973723 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:30:25.976429 unknown[777]: wrote ssh authorized keys file for user: core Feb 8 23:30:25.977815 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:30:25.981835 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:30:25.983758 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 8 23:30:26.415027 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:30:26.616486 systemd-networkd[633]: eth0: Gained IPv6LL Feb 8 23:30:27.370453 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 8 23:30:27.374036 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:30:27.374036 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:30:27.374036 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:30:27.729329 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:30:28.218665 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 8 23:30:28.218665 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:30:28.224075 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:30:28.224075 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:30:28.360887 ignition[777]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:30:29.290182 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 8 23:30:29.290182 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:30:29.290182 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:30:29.306975 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:30:29.400341 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 8 23:30:31.416260 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 8 23:30:31.418492 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:30:31.418492 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:30:31.418492 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:30:31.418492 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:30:31.418492 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:30:31.418492 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:30:31.418492 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:30:31.418492 ignition[777]: INFO : files: op(a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(a): op(b): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(c): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(c): op(d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(c): op(d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(c): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(e): 
[started] processing unit "prepare-critools.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(e): op(f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(e): op(f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(e): [finished] processing unit "prepare-critools.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Feb 8 23:30:31.429033 ignition[777]: INFO : files: op(12): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:30:31.475561 kernel: kauditd_printk_skb: 27 callbacks suppressed Feb 8 23:30:31.475595 kernel: audit: type=1130 audit(1707435031.442:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.475617 kernel: audit: type=1130 audit(1707435031.461:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.475638 kernel: audit: type=1131 audit(1707435031.461:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.475659 kernel: audit: type=1130 audit(1707435031.469:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.441054 systemd[1]: Finished ignition-files.service. 
Feb 8 23:30:31.476787 ignition[777]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:30:31.476787 ignition[777]: INFO : files: op(13): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:30:31.476787 ignition[777]: INFO : files: op(13): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:30:31.476787 ignition[777]: INFO : files: op(14): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:30:31.476787 ignition[777]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:30:31.476787 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:30:31.476787 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:30:31.476787 ignition[777]: INFO : files: files passed Feb 8 23:30:31.476787 ignition[777]: INFO : Ignition finished successfully Feb 8 23:30:31.444178 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 8 23:30:31.453353 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:30:31.488092 initrd-setup-root-after-ignition[801]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:30:31.455046 systemd[1]: Starting ignition-quench.service... Feb 8 23:30:31.460636 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:30:31.460757 systemd[1]: Finished ignition-quench.service. Feb 8 23:30:31.462068 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:30:31.470208 systemd[1]: Reached target ignition-complete.target. Feb 8 23:30:31.475421 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:30:31.495781 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:30:31.504452 kernel: audit: type=1130 audit(1707435031.496:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.504500 kernel: audit: type=1131 audit(1707435031.496:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.495877 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:30:31.496729 systemd[1]: Reached target initrd-fs.target. Feb 8 23:30:31.504944 systemd[1]: Reached target initrd.target. Feb 8 23:30:31.506011 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:30:31.507190 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:30:31.518635 systemd[1]: Finished dracut-pre-pivot.service. 
Feb 8 23:30:31.530446 kernel: audit: type=1130 audit(1707435031.518:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.519798 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:30:31.529942 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:30:31.531134 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:30:31.532774 systemd[1]: Stopped target timers.target. Feb 8 23:30:31.533506 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:30:31.533622 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:30:31.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.535487 systemd[1]: Stopped target initrd.target. Feb 8 23:30:31.546695 kernel: audit: type=1131 audit(1707435031.534:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.543862 systemd[1]: Stopped target basic.target. Feb 8 23:30:31.544482 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:30:31.545081 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:30:31.545696 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:30:31.547273 systemd[1]: Stopped target remote-fs.target. Feb 8 23:30:31.548782 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:30:31.550334 systemd[1]: Stopped target sysinit.target. Feb 8 23:30:31.551873 systemd[1]: Stopped target local-fs.target. Feb 8 23:30:31.574661 kernel: audit: type=1131 audit(1707435031.557:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.553502 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:30:31.554957 systemd[1]: Stopped target swap.target. Feb 8 23:30:31.580709 kernel: audit: type=1131 audit(1707435031.576:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.556392 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:30:31.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.556560 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 8 23:30:31.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.557961 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:30:31.575105 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:30:31.575260 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:30:31.576638 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:30:31.576798 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:30:31.581396 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:30:31.581554 systemd[1]: Stopped ignition-files.service. Feb 8 23:30:31.583202 systemd[1]: Stopping ignition-mount.service... Feb 8 23:30:31.583741 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:30:31.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.583922 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:30:31.592112 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:30:31.592665 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:30:31.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.592839 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:30:31.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:30:31.603778 ignition[815]: INFO : Ignition 2.14.0 Feb 8 23:30:31.603778 ignition[815]: INFO : Stage: umount Feb 8 23:30:31.603778 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:30:31.603778 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:30:31.603778 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:30:31.603778 ignition[815]: INFO : umount: umount passed Feb 8 23:30:31.603778 ignition[815]: INFO : Ignition finished successfully Feb 8 23:30:31.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.593770 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:30:31.593928 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:30:31.597425 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:30:31.597522 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:30:31.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.599093 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:30:31.599184 systemd[1]: Stopped ignition-mount.service. Feb 8 23:30:31.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.626000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:30:31.601539 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:30:31.601589 systemd[1]: Stopped ignition-disks.service. Feb 8 23:30:31.602052 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:30:31.602091 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:30:31.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.602565 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:30:31.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.602602 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:30:31.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.603265 systemd[1]: Stopped target network.target. Feb 8 23:30:31.606219 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:30:31.606282 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:30:31.608741 systemd[1]: Stopped target paths.target. Feb 8 23:30:31.609267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 8 23:30:31.610486 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:30:31.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.611008 systemd[1]: Stopped target slices.target. Feb 8 23:30:31.611438 systemd[1]: Stopped target sockets.target. Feb 8 23:30:31.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.611914 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:30:31.611944 systemd[1]: Closed iscsid.socket. Feb 8 23:30:31.612454 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:30:31.612479 systemd[1]: Closed iscsiuio.socket. Feb 8 23:30:31.612891 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:30:31.612932 systemd[1]: Stopped ignition-setup.service. Feb 8 23:30:31.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.613917 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:30:31.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.614532 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:30:31.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.619317 systemd-networkd[633]: eth0: DHCPv6 lease lost Feb 8 23:30:31.653000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:30:31.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.621339 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:30:31.622052 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:30:31.622245 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:30:31.624233 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:30:31.624325 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:30:31.626346 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:30:31.626675 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:30:31.628192 systemd[1]: Stopping network-cleanup.service... Feb 8 23:30:31.628948 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:30:31.629015 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:30:31.632151 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:30:31.632242 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:30:31.635575 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:30:31.635674 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:30:31.636759 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:30:31.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:30:31.640430 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 8 23:30:31.641197 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:30:31.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.641422 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:30:31.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:31.644363 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:30:31.644646 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:30:31.646501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:30:31.646559 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:30:31.647299 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:30:31.647340 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:30:31.649946 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:30:31.650015 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:30:31.651059 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:30:31.651122 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:30:31.652066 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:30:31.652134 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:30:31.653151 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:30:31.653207 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:30:31.655236 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:30:31.663877 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:30:31.663975 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:30:31.666006 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:30:31.666109 systemd[1]: Stopped network-cleanup.service. Feb 8 23:30:31.666939 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:30:31.667027 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:30:31.667811 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:30:31.669469 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:30:31.689294 systemd[1]: Switching root. Feb 8 23:30:31.707694 iscsid[638]: iscsid shutting down. Feb 8 23:30:31.708394 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 8 23:30:31.708446 systemd-journald[185]: Journal stopped Feb 8 23:30:36.441946 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:30:36.442024 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 8 23:30:36.442039 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:30:36.442051 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:30:36.442061 kernel: SELinux: policy capability open_perms=1 Feb 8 23:30:36.442078 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:30:36.442108 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:30:36.442119 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:30:36.442133 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:30:36.442145 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:30:36.442158 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:30:36.442170 systemd[1]: Successfully loaded SELinux policy in 126.043ms. Feb 8 23:30:36.442192 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.305ms. Feb 8 23:30:36.442207 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:30:36.442220 systemd[1]: Detected virtualization kvm. Feb 8 23:30:36.442235 systemd[1]: Detected architecture x86-64. Feb 8 23:30:36.442247 systemd[1]: Detected first boot. Feb 8 23:30:36.442262 systemd[1]: Hostname set to . Feb 8 23:30:36.442275 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:30:36.442287 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:30:36.442299 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:30:36.442312 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:30:36.442325 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:30:36.442339 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:30:36.442353 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:30:36.442366 systemd[1]: Stopped iscsiuio.service. Feb 8 23:30:36.442770 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:30:36.442791 systemd[1]: Stopped iscsid.service. Feb 8 23:30:36.442803 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:30:36.442815 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:30:36.442828 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:30:36.442841 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:30:36.442857 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:30:36.442870 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 8 23:30:36.442882 systemd[1]: Created slice system-getty.slice. Feb 8 23:30:36.442894 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:30:36.442906 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:30:36.442919 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:30:36.442931 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Feb 8 23:30:36.442943 systemd[1]: Created slice user.slice. Feb 8 23:30:36.442957 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:30:36.442970 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:30:36.442983 systemd[1]: Set up automount boot.automount. Feb 8 23:30:36.442995 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:30:36.443006 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:30:36.443018 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:30:36.443030 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:30:36.443044 systemd[1]: Reached target integritysetup.target. Feb 8 23:30:36.443056 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:30:36.443068 systemd[1]: Reached target remote-fs.target. Feb 8 23:30:36.443081 systemd[1]: Reached target slices.target. Feb 8 23:30:36.443093 systemd[1]: Reached target swap.target. Feb 8 23:30:36.443106 systemd[1]: Reached target torcx.target. Feb 8 23:30:36.443118 systemd[1]: Reached target veritysetup.target. Feb 8 23:30:36.443129 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:30:36.443141 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:30:36.443155 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:30:36.443167 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:30:36.443179 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:30:36.443192 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:30:36.443204 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:30:36.443215 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:30:36.443227 systemd[1]: Mounting media.mount... Feb 8 23:30:36.443239 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:30:36.443251 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:30:36.443265 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:30:36.443277 systemd[1]: Mounting tmp.mount... Feb 8 23:30:36.443289 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:30:36.443301 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:30:36.443317 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:30:36.443330 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:30:36.443342 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:30:36.443354 systemd[1]: Starting modprobe@drm.service... Feb 8 23:30:36.443366 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:30:36.443465 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:30:36.443483 systemd[1]: Starting modprobe@loop.service... Feb 8 23:30:36.443496 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:30:36.443509 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:30:36.443522 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:30:36.443534 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:30:36.443545 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:30:36.443557 systemd[1]: Stopped systemd-journald.service. Feb 8 23:30:36.443569 kernel: fuse: init (API version 7.34) Feb 8 23:30:36.443583 systemd[1]: Starting systemd-journald.service... Feb 8 23:30:36.443595 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:30:36.443607 systemd[1]: Starting systemd-network-generator.service... 
Feb 8 23:30:36.443619 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:30:36.443631 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:30:36.443643 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:30:36.443655 systemd[1]: Stopped verity-setup.service. Feb 8 23:30:36.443667 kernel: loop: module loaded Feb 8 23:30:36.443678 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:30:36.443690 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:30:36.443704 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:30:36.443715 systemd[1]: Mounted media.mount. Feb 8 23:30:36.443727 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:30:36.443739 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:30:36.443750 systemd[1]: Mounted tmp.mount. Feb 8 23:30:36.443762 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:30:36.443776 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:30:36.443788 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:30:36.443800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:30:36.443812 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:30:36.443824 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:30:36.443836 systemd[1]: Finished modprobe@drm.service. Feb 8 23:30:36.443848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:30:36.443862 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:30:36.443874 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:30:36.443886 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:30:36.443911 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:30:36.443924 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:30:36.443937 systemd[1]: Finished modprobe@loop.service. Feb 8 23:30:36.443948 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:30:36.443959 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:30:36.443973 systemd-journald[922]: Journal started Feb 8 23:30:36.444016 systemd-journald[922]: Runtime Journal (/run/log/journal/c6890bc9b82c4b56af34ff0db62524ca) is 4.9M, max 39.5M, 34.5M free. 
Feb 8 23:30:32.122000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:30:32.263000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:30:32.263000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:30:32.263000 audit: BPF prog-id=10 op=LOAD Feb 8 23:30:32.263000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:30:32.263000 audit: BPF prog-id=11 op=LOAD Feb 8 23:30:32.263000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:30:32.414000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:30:32.414000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:30:32.414000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:30:32.416000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:30:32.416000 audit[847]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:30:32.416000 audit: CWD cwd="/" Feb 8 23:30:32.416000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:32.416000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:32.416000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:30:36.187000 audit: BPF prog-id=12 op=LOAD Feb 8 23:30:36.187000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:30:36.188000 audit: BPF prog-id=13 op=LOAD Feb 8 23:30:36.188000 audit: BPF prog-id=14 op=LOAD Feb 8 23:30:36.188000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:30:36.188000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:30:36.189000 audit: BPF prog-id=15 op=LOAD Feb 8 23:30:36.189000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:30:36.189000 audit: BPF prog-id=16 
op=LOAD Feb 8 23:30:36.189000 audit: BPF prog-id=17 op=LOAD Feb 8 23:30:36.190000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:30:36.190000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:30:36.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.201000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:30:36.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.349000 audit: BPF prog-id=18 op=LOAD Feb 8 23:30:36.351000 audit: BPF prog-id=19 op=LOAD Feb 8 23:30:36.352000 audit: BPF prog-id=20 op=LOAD Feb 8 23:30:36.352000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:30:36.352000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:30:36.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 8 23:30:36.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.452409 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:30:36.452442 kernel: kauditd_printk_skb: 93 callbacks suppressed Feb 8 23:30:36.452458 kernel: audit: type=1130 audit(1707435036.446:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.452474 systemd[1]: Started systemd-journald.service. Feb 8 23:30:36.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:30:36.440000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:30:36.440000 audit[922]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe5f884650 a2=4000 a3=7ffe5f8846ec items=0 ppid=1 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:30:36.440000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:30:36.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:32.411042 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:30:36.185585 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:30:32.412120 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:30:36.185599 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 8 23:30:32.412144 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:30:36.191606 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:30:32.412178 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:30:36.457356 kernel: audit: type=1130 audit(1707435036.453:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:32.412207 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:30:36.453815 systemd[1]: Reached target network-pre.target. 
Feb 8 23:30:32.412242 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:30:32.412258 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:30:32.412494 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:30:32.412539 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:30:32.412554 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:30:32.413590 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:30:32.413634 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:30:32.413656 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:30:32.413674 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:30:32.413693 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:30:32.413709 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:30:35.726545 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:30:35.726860 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:30:35.726995 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:30:36.458796 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 8 23:30:35.727244 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:30:35.727314 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:30:35.727428 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:30:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:30:36.462585 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:30:36.465032 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:30:36.466967 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:30:36.472065 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:30:36.472697 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:30:36.473873 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:30:36.474403 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:30:36.475420 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:30:36.477904 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:30:36.480903 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:30:36.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.485554 systemd-journald[922]: Time spent on flushing to /var/log/journal/c6890bc9b82c4b56af34ff0db62524ca is 28.876ms for 1113 entries. Feb 8 23:30:36.485554 systemd-journald[922]: System Journal (/var/log/journal/c6890bc9b82c4b56af34ff0db62524ca) is 8.0M, max 584.8M, 576.8M free. Feb 8 23:30:36.533311 kernel: audit: type=1130 audit(1707435036.481:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.533349 systemd-journald[922]: Received client request to flush runtime journal. Feb 8 23:30:36.533395 kernel: audit: type=1130 audit(1707435036.506:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.533422 kernel: audit: type=1130 audit(1707435036.520:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:30:36.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.481743 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:30:36.488929 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:30:36.533780 udevadm[954]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 8 23:30:36.490749 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:30:36.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.506319 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:30:36.506915 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:30:36.520418 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:30:36.533547 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:30:36.534875 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:30:36.541759 kernel: audit: type=1130 audit(1707435036.533:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.541803 kernel: audit: type=1130 audit(1707435036.537:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:36.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.275770 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:30:37.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.286000 audit: BPF prog-id=21 op=LOAD Feb 8 23:30:37.287481 systemd[1]: Starting systemd-udevd.service... Feb 8 23:30:37.290130 kernel: audit: type=1130 audit(1707435037.275:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.290182 kernel: audit: type=1334 audit(1707435037.286:140): prog-id=21 op=LOAD Feb 8 23:30:37.290202 kernel: audit: type=1334 audit(1707435037.286:141): prog-id=22 op=LOAD Feb 8 23:30:37.286000 audit: BPF prog-id=22 op=LOAD Feb 8 23:30:37.286000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:30:37.286000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:30:37.321565 systemd-udevd[957]: Using default interface naming scheme 'v252'. Feb 8 23:30:37.429443 systemd[1]: Started systemd-udevd.service. Feb 8 23:30:37.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:30:37.431000 audit: BPF prog-id=23 op=LOAD Feb 8 23:30:37.436165 systemd[1]: Starting systemd-networkd.service... Feb 8 23:30:37.454000 audit: BPF prog-id=24 op=LOAD Feb 8 23:30:37.455000 audit: BPF prog-id=25 op=LOAD Feb 8 23:30:37.455000 audit: BPF prog-id=26 op=LOAD Feb 8 23:30:37.457640 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:30:37.513290 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:30:37.518115 systemd[1]: Started systemd-userdbd.service. Feb 8 23:30:37.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.560007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:30:37.609439 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 8 23:30:37.616628 systemd-networkd[966]: lo: Link UP Feb 8 23:30:37.616638 systemd-networkd[966]: lo: Gained carrier Feb 8 23:30:37.617053 systemd-networkd[966]: Enumeration completed Feb 8 23:30:37.617149 systemd[1]: Started systemd-networkd.service. Feb 8 23:30:37.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.617886 systemd-networkd[966]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:30:37.619699 systemd-networkd[966]: eth0: Link UP Feb 8 23:30:37.619708 systemd-networkd[966]: eth0: Gained carrier Feb 8 23:30:37.628422 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:30:37.631573 systemd-networkd[966]: eth0: DHCPv4 address 172.24.4.234/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:30:37.623000 audit[958]: AVC avc: denied { confidentiality } for pid=958 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:30:37.623000 audit[958]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f59d27ec50 a1=32194 a2=7f92d8967bc5 a3=5 items=108 ppid=957 pid=958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:30:37.623000 audit: CWD cwd="/" Feb 8 23:30:37.623000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=1 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=2 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=3 name=(null) inode=14405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=4 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=5 name=(null) inode=14406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=6 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=7 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=8 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=9 name=(null) inode=14408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=10 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=11 name=(null) inode=14409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=12 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=13 name=(null) inode=14410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=14 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=15 name=(null) inode=14411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=16 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=17 name=(null) inode=14412 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=18 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=19 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=20 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=21 name=(null) inode=14414 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=22 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=23 name=(null) inode=14415 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=24 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=25 name=(null) inode=14416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=26 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=27 name=(null) inode=14417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=28 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=29 name=(null) inode=14418 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=30 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=31 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=32 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=33 name=(null) inode=14420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=34 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=35 name=(null) inode=14421 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=36 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=37 name=(null) inode=14422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=38 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=39 name=(null) inode=14423 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=40 name=(null) inode=14419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=41 name=(null) inode=14424 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=42 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=43 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=44 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=45 name=(null) inode=14426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=46 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=47 name=(null) inode=14427 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=48 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=49 name=(null) inode=14428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=50 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=51 name=(null) inode=14429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=52 name=(null) inode=14425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=53 name=(null) inode=14430 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH 
item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=55 name=(null) inode=14431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=56 name=(null) inode=14431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=57 name=(null) inode=14432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=58 name=(null) inode=14431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=59 name=(null) inode=14433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=60 name=(null) inode=14431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=61 name=(null) inode=14434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=62 name=(null) inode=14434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=63 name=(null) inode=14435 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=64 name=(null) inode=14434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=65 name=(null) inode=14436 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=66 name=(null) inode=14434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=67 name=(null) inode=14437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=68 name=(null) inode=14434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=69 name=(null) inode=14438 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=70 name=(null) inode=14434 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=71 name=(null) inode=14439 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=72 name=(null) inode=14431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=73 name=(null) inode=14440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=74 name=(null) inode=14440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=75 name=(null) inode=14441 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=76 name=(null) inode=14440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=77 name=(null) inode=14442 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=78 name=(null) inode=14440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=79 name=(null) inode=14443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=80 name=(null) inode=14440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=81 name=(null) inode=14444 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=82 name=(null) inode=14440 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=83 name=(null) inode=14445 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=84 name=(null) inode=14431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=85 name=(null) inode=14446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=86 name=(null) inode=14446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=87 name=(null) inode=14447 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=88 name=(null) inode=14446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=89 name=(null) inode=14448 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=90 name=(null) inode=14446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=91 name=(null) inode=14449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=92 name=(null) inode=14446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=93 name=(null) inode=14450 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=94 name=(null) inode=14446 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=95 name=(null) inode=14451 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=96 name=(null) inode=14431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=97 name=(null) inode=14452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=98 name=(null) inode=14452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=99 name=(null) inode=14453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=100 name=(null) inode=14452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=101 name=(null) inode=14454 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=102 name=(null) inode=14452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=103 name=(null) inode=14455 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=104 name=(null) inode=14452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=105 name=(null) inode=14456 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=106 name=(null) inode=14452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PATH item=107 name=(null) inode=14457 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:30:37.623000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:30:37.654469 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:30:37.657415 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 8 23:30:37.661404 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:30:37.696053 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:30:37.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.698542 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:30:37.740448 lvm[986]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:30:37.775691 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:30:37.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.776776 systemd[1]: Reached target cryptsetup.target. Feb 8 23:30:37.779249 systemd[1]: Starting lvm2-activation.service... Feb 8 23:30:37.788755 lvm[987]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:30:37.828144 systemd[1]: Finished lvm2-activation.service. Feb 8 23:30:37.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:37.829521 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:30:37.830629 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:30:37.830690 systemd[1]: Reached target local-fs.target. Feb 8 23:30:37.831782 systemd[1]: Reached target machines.target. Feb 8 23:30:37.835309 systemd[1]: Starting ldconfig.service... Feb 8 23:30:37.837614 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:30:37.837732 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 8 23:30:37.839992 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:30:37.844625 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:30:37.852808 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:30:37.857020 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:30:37.857132 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:30:37.860179 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:30:37.865206 systemd[1]: boot.automount: Got automount request for /boot, triggered by 989 (bootctl) Feb 8 23:30:37.868640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:30:37.892844 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:30:37.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.235348 systemd-tmpfiles[992]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:30:38.502652 systemd-tmpfiles[992]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:30:38.545811 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:30:38.549359 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:30:38.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.556567 systemd-tmpfiles[992]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:30:38.710912 systemd-fsck[997]: fsck.fat 4.2 (2021-01-31) Feb 8 23:30:38.710912 systemd-fsck[997]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:30:38.714286 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:30:38.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.719780 systemd[1]: Mounting boot.mount... Feb 8 23:30:38.751276 systemd[1]: Mounted boot.mount. Feb 8 23:30:38.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.781018 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:30:38.867234 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:30:38.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.869138 systemd[1]: Starting audit-rules.service... Feb 8 23:30:38.870969 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:30:38.873458 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:30:38.878000 audit: BPF prog-id=27 op=LOAD Feb 8 23:30:38.879470 systemd[1]: Starting systemd-resolved.service... 
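The "Duplicate line for path ..., ignoring" warnings above come from overlapping tmpfiles.d fragments. A rough Python sketch of that duplicate check follows; it only approximates systemd-tmpfiles' own parsing (no specifier expansion, no /etc-over-/usr masking) and is meant to show where such warnings originate.

#!/usr/bin/env python3
"""Sketch: find duplicate path entries across tmpfiles.d fragments.

tmpfiles.d lines have the form "Type Path Mode User Group Age Argument";
the path is the second field. Comments and blank lines are skipped.
"""
import glob
import shlex

seen = {}  # path -> first "file:lineno" that claimed it

for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    with open(conf, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = shlex.split(line)
            if len(fields) < 2:
                continue
            path = fields[1]
            where = f"{conf}:{lineno}"
            if path in seen:
                print(f"{where}: duplicate line for path {path!r} "
                      f"(first seen at {seen[path]})")
            else:
                seen[path] = where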
Feb 8 23:30:38.880000 audit: BPF prog-id=28 op=LOAD Feb 8 23:30:38.881676 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:30:38.884808 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:30:38.895527 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:30:38.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.896162 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:30:38.901000 audit[1005]: SYSTEM_BOOT pid=1005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.903544 systemd-networkd[966]: eth0: Gained IPv6LL Feb 8 23:30:38.906576 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:30:38.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.936774 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:30:38.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:30:38.974000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:30:38.974000 audit[1021]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd491f3a90 a2=420 a3=0 items=0 ppid=1000 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:30:38.974000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:30:38.977297 augenrules[1021]: No rules Feb 8 23:30:38.977391 systemd[1]: Finished audit-rules.service. Feb 8 23:30:38.999888 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:30:39.000522 systemd[1]: Reached target time-set.target. Feb 8 23:30:39.006525 systemd-resolved[1003]: Positive Trust Anchors: Feb 8 23:30:39.006866 systemd-resolved[1003]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:30:39.006971 systemd-resolved[1003]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:30:39.024425 systemd-timesyncd[1004]: Contacted time server 129.250.35.251:123 (0.flatcar.pool.ntp.org). Feb 8 23:30:39.024538 systemd-timesyncd[1004]: Initial clock synchronization to Thu 2024-02-08 23:30:38.972623 UTC. Feb 8 23:30:39.026094 systemd-resolved[1003]: Using system hostname 'ci-3510-3-2-9-158debf268.novalocal'. 
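The "Contacted time server ... Initial clock synchronization" lines above boil down to an (S)NTP exchange on UDP port 123. Below is a minimal SNTP client sketch in Python that reads only the server's transmit timestamp; real systemd-timesyncd does considerably more (polling intervals, clock slewing, retries), and the pool name is simply reused from the log.

#!/usr/bin/env python3
"""Sketch: minimal SNTP query, illustrating the exchange systemd-timesyncd
performs with 0.flatcar.pool.ntp.org in the log above."""
import socket
import struct
from datetime import datetime, timezone

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01
SERVER = "0.flatcar.pool.ntp.org"  # same pool name reported in the log

def sntp_time(server=SERVER, timeout=5.0):
    packet = bytearray(48)
    packet[0] = 0x1B  # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(bytes(packet), (server, 123))
        reply, _ = sock.recvfrom(48)
    # Transmit timestamp: 32-bit seconds field at offset 40, big-endian.
    (tx_seconds,) = struct.unpack("!I", reply[40:44])
    return datetime.fromtimestamp(tx_seconds - NTP_EPOCH_OFFSET, tz=timezone.utc)

if __name__ == "__main__":
    print("server time:", sntp_time())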
Feb 8 23:30:39.028044 systemd[1]: Started systemd-resolved.service. Feb 8 23:30:39.028665 systemd[1]: Reached target network.target. Feb 8 23:30:39.029096 systemd[1]: Reached target nss-lookup.target. Feb 8 23:30:39.207787 ldconfig[988]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:30:39.223212 systemd[1]: Finished ldconfig.service. Feb 8 23:30:39.227111 systemd[1]: Starting systemd-update-done.service... Feb 8 23:30:39.233094 systemd[1]: Finished systemd-update-done.service. Feb 8 23:30:39.234475 systemd[1]: Reached target sysinit.target. Feb 8 23:30:39.235641 systemd[1]: Started motdgen.path. Feb 8 23:30:39.236932 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:30:39.238412 systemd[1]: Started logrotate.timer. Feb 8 23:30:39.239584 systemd[1]: Started mdadm.timer. Feb 8 23:30:39.240623 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:30:39.241735 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:30:39.241796 systemd[1]: Reached target paths.target. Feb 8 23:30:39.242848 systemd[1]: Reached target timers.target. Feb 8 23:30:39.245082 systemd[1]: Listening on dbus.socket. Feb 8 23:30:39.248119 systemd[1]: Starting docker.socket... Feb 8 23:30:39.254358 systemd[1]: Listening on sshd.socket. Feb 8 23:30:39.255685 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:30:39.256686 systemd[1]: Listening on docker.socket. Feb 8 23:30:39.257928 systemd[1]: Reached target sockets.target. Feb 8 23:30:39.259036 systemd[1]: Reached target basic.target. Feb 8 23:30:39.260306 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:30:39.260446 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:30:39.262659 systemd[1]: Starting containerd.service... Feb 8 23:30:39.268613 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 8 23:30:39.274516 systemd[1]: Starting dbus.service... Feb 8 23:30:39.276007 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:30:39.277758 systemd[1]: Starting extend-filesystems.service... Feb 8 23:30:39.278358 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:30:39.280410 systemd[1]: Starting motdgen.service... Feb 8 23:30:39.324026 jq[1035]: false Feb 8 23:30:39.282495 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:30:39.284619 systemd[1]: Starting prepare-critools.service... Feb 8 23:30:39.287952 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:30:39.290117 systemd[1]: Starting sshd-keygen.service... Feb 8 23:30:39.296204 systemd[1]: Starting systemd-logind.service... Feb 8 23:30:39.296768 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:30:39.296827 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 8 23:30:39.326975 jq[1044]: true Feb 8 23:30:39.297293 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:30:39.298054 systemd[1]: Starting update-engine.service... Feb 8 23:30:39.300901 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:30:39.305450 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:30:39.305634 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:30:39.325407 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:30:39.325562 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:30:39.336409 tar[1046]: ./ Feb 8 23:30:39.336409 tar[1046]: ./loopback Feb 8 23:30:39.339115 tar[1047]: crictl Feb 8 23:30:39.365027 jq[1058]: true Feb 8 23:30:39.374511 extend-filesystems[1036]: Found vda Feb 8 23:30:39.376176 extend-filesystems[1036]: Found vda1 Feb 8 23:30:39.376776 extend-filesystems[1036]: Found vda2 Feb 8 23:30:39.377413 extend-filesystems[1036]: Found vda3 Feb 8 23:30:39.380144 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:30:39.380399 extend-filesystems[1036]: Found usr Feb 8 23:30:39.380549 systemd[1]: Finished motdgen.service. Feb 8 23:30:39.381068 extend-filesystems[1036]: Found vda4 Feb 8 23:30:39.382429 extend-filesystems[1036]: Found vda6 Feb 8 23:30:39.382998 extend-filesystems[1036]: Found vda7 Feb 8 23:30:39.383628 extend-filesystems[1036]: Found vda9 Feb 8 23:30:39.388830 extend-filesystems[1036]: Checking size of /dev/vda9 Feb 8 23:30:39.400820 dbus-daemon[1034]: [system] SELinux support is enabled Feb 8 23:30:39.400984 systemd[1]: Started dbus.service. Feb 8 23:30:39.403795 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:30:39.403825 systemd[1]: Reached target system-config.target. Feb 8 23:30:39.404334 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:30:39.404357 systemd[1]: Reached target user-config.target. Feb 8 23:30:39.412447 extend-filesystems[1036]: Resized partition /dev/vda9 Feb 8 23:30:39.439926 extend-filesystems[1074]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:30:39.462747 update_engine[1043]: I0208 23:30:39.457826 1043 main.cc:92] Flatcar Update Engine starting Feb 8 23:30:39.466982 env[1054]: time="2024-02-08T23:30:39.466923871Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:30:39.474578 systemd[1]: Started update-engine.service. Feb 8 23:30:39.474920 update_engine[1043]: I0208 23:30:39.474622 1043 update_check_scheduler.cc:74] Next update check in 10m19s Feb 8 23:30:39.477528 systemd[1]: Started locksmithd.service. Feb 8 23:30:39.479430 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 8 23:30:39.522688 env[1054]: time="2024-02-08T23:30:39.522630760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:30:39.539647 env[1054]: time="2024-02-08T23:30:39.539568276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:30:39.541575 env[1054]: time="2024-02-08T23:30:39.541514466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:30:39.541575 env[1054]: time="2024-02-08T23:30:39.541567796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:30:39.541898 env[1054]: time="2024-02-08T23:30:39.541864653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.541911601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.541930396Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.541943300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.542053186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.545430450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.545613033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.545652878Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.545713020Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:30:39.546716 env[1054]: time="2024-02-08T23:30:39.545750581Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:30:39.544948 systemd-logind[1042]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:30:39.544967 systemd-logind[1042]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:30:39.547363 systemd-logind[1042]: New seat seat0. Feb 8 23:30:39.551713 systemd[1]: Started systemd-logind.service. Feb 8 23:30:39.584767 tar[1046]: ./bandwidth Feb 8 23:30:39.590104 bash[1087]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:30:39.590608 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:30:39.595403 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 8 23:30:39.728271 extend-filesystems[1074]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:30:39.728271 extend-filesystems[1074]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 8 23:30:39.728271 extend-filesystems[1074]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 8 23:30:39.725651 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 8 23:30:39.752734 coreos-metadata[1031]: Feb 08 23:30:39.601 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 8 23:30:39.752734 coreos-metadata[1031]: Feb 08 23:30:39.622 INFO Fetch successful Feb 8 23:30:39.752734 coreos-metadata[1031]: Feb 08 23:30:39.622 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 8 23:30:39.752734 coreos-metadata[1031]: Feb 08 23:30:39.636 INFO Fetch successful Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.735733063Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.735814756Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.735851495Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.735941494Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.735996637Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736021183Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736075826Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736098869Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736166065Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736186153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736201912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736239783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736420582Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:30:39.753142 env[1054]: time="2024-02-08T23:30:39.736546468Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:30:39.756957 extend-filesystems[1036]: Resized filesystem in /dev/vda9 Feb 8 23:30:39.725832 systemd[1]: Finished extend-filesystems.service. Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.736978469Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737029935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737047458Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737122178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737141635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737176700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737197670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737213319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737234339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737269755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737286456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737302917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737659416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737682699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.760951 env[1054]: time="2024-02-08T23:30:39.737697477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.734791 unknown[1031]: wrote ssh authorized keys file for user: core Feb 8 23:30:39.764200 env[1054]: time="2024-02-08T23:30:39.737735358Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:30:39.764200 env[1054]: time="2024-02-08T23:30:39.737753102Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:30:39.764200 env[1054]: time="2024-02-08T23:30:39.737767619Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:30:39.764200 env[1054]: time="2024-02-08T23:30:39.737803897Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:30:39.764200 env[1054]: time="2024-02-08T23:30:39.737843812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:30:39.740325 systemd[1]: Started containerd.service. 
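The containerd entries above and below use a logfmt-like layout of key=value pairs with quoted values (time=..., level=..., msg="..."). The following Python sketch splits such lines into fields when reading these logs offline; it is a convenience for the lines shown here, not containerd's own format definition.

#!/usr/bin/env python3
"""Sketch: pull key=value fields out of containerd's logfmt-like log lines."""
import re

FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse(line):
    fields = {}
    for key, value in FIELD.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        fields[key] = value
    return fields

sample = ('time="2024-02-08T23:30:39.466923871Z" level=info '
          'msg="starting containerd" version=1.6.16')
print(parse(sample))
# {'time': '2024-02-08T23:30:39.466923871Z', 'level': 'info',
#  'msg': 'starting containerd', 'version': '1.6.16'}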
Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.738113938Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.738199268Z" level=info msg="Connect containerd service" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.738360160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.739206147Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.740136552Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.740204890Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.740704657Z" level=info msg="Start subscribing containerd event" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.740847736Z" level=info msg="Start recovering state" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.741004149Z" level=info msg="Start event monitor" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.741034536Z" level=info msg="Start snapshots syncer" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.741057910Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.741079660Z" level=info msg="Start streaming server" Feb 8 23:30:39.764874 env[1054]: time="2024-02-08T23:30:39.741593013Z" level=info msg="containerd successfully booted in 0.280713s" Feb 8 23:30:39.775022 update-ssh-keys[1098]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:30:39.774783 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 8 23:30:39.835938 tar[1046]: ./ptp Feb 8 23:30:39.933745 tar[1046]: ./vlan Feb 8 23:30:40.029831 tar[1046]: ./host-device Feb 8 23:30:40.104664 tar[1046]: ./tuning Feb 8 23:30:40.153005 tar[1046]: ./vrf Feb 8 23:30:40.183929 systemd[1]: Created slice system-sshd.slice. Feb 8 23:30:40.190397 tar[1046]: ./sbr Feb 8 23:30:40.225559 tar[1046]: ./tap Feb 8 23:30:40.298353 tar[1046]: ./dhcp Feb 8 23:30:40.433588 systemd[1]: Finished prepare-critools.service. Feb 8 23:30:40.442306 tar[1046]: ./static Feb 8 23:30:40.471050 tar[1046]: ./firewall Feb 8 23:30:40.514741 tar[1046]: ./macvlan Feb 8 23:30:40.554462 tar[1046]: ./dummy Feb 8 23:30:40.593245 tar[1046]: ./bridge Feb 8 23:30:40.635836 tar[1046]: ./ipvlan Feb 8 23:30:40.675439 tar[1046]: ./portmap Feb 8 23:30:40.712646 tar[1046]: ./host-local Feb 8 23:30:40.751406 locksmithd[1091]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:30:40.757052 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:30:40.931171 sshd_keygen[1063]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:30:40.964802 systemd[1]: Finished sshd-keygen.service. Feb 8 23:30:40.966736 systemd[1]: Starting issuegen.service... Feb 8 23:30:40.968111 systemd[1]: Started sshd@0-172.24.4.234:22-172.24.4.1:37260.service. Feb 8 23:30:40.977923 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:30:40.978120 systemd[1]: Finished issuegen.service. Feb 8 23:30:40.979933 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:30:40.987782 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:30:40.989561 systemd[1]: Started getty@tty1.service. Feb 8 23:30:40.991525 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:30:40.992231 systemd[1]: Reached target getty.target. Feb 8 23:30:40.993186 systemd[1]: Reached target multi-user.target. Feb 8 23:30:40.995504 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:30:41.004515 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:30:41.004668 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:30:41.005309 systemd[1]: Startup finished in 1.050s (kernel) + 10.143s (initrd) + 9.034s (userspace) = 20.227s. 
Feb 8 23:30:42.041821 sshd[1112]: Accepted publickey for core from 172.24.4.1 port 37260 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:30:42.046309 sshd[1112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:42.072045 systemd[1]: Created slice user-500.slice. Feb 8 23:30:42.075761 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:30:42.082101 systemd-logind[1042]: New session 1 of user core. Feb 8 23:30:42.097264 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:30:42.102133 systemd[1]: Starting user@500.service... Feb 8 23:30:42.109676 (systemd)[1121]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:42.238806 systemd[1121]: Queued start job for default target default.target. Feb 8 23:30:42.239643 systemd[1121]: Reached target paths.target. Feb 8 23:30:42.239771 systemd[1121]: Reached target sockets.target. Feb 8 23:30:42.239872 systemd[1121]: Reached target timers.target. Feb 8 23:30:42.239967 systemd[1121]: Reached target basic.target. Feb 8 23:30:42.240165 systemd[1121]: Reached target default.target. Feb 8 23:30:42.240385 systemd[1121]: Startup finished in 118ms. Feb 8 23:30:42.240604 systemd[1]: Started user@500.service. Feb 8 23:30:42.243565 systemd[1]: Started session-1.scope. Feb 8 23:30:42.730319 systemd[1]: Started sshd@1-172.24.4.234:22-172.24.4.1:37274.service. Feb 8 23:30:44.286996 sshd[1130]: Accepted publickey for core from 172.24.4.1 port 37274 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:30:44.292008 sshd[1130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:44.305809 systemd-logind[1042]: New session 2 of user core. Feb 8 23:30:44.306582 systemd[1]: Started session-2.scope. Feb 8 23:30:45.028821 sshd[1130]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:45.040293 systemd[1]: Started sshd@2-172.24.4.234:22-172.24.4.1:36880.service. Feb 8 23:30:45.042236 systemd[1]: sshd@1-172.24.4.234:22-172.24.4.1:37274.service: Deactivated successfully. Feb 8 23:30:45.044871 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:30:45.050094 systemd-logind[1042]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:30:45.053824 systemd-logind[1042]: Removed session 2. Feb 8 23:30:46.471755 sshd[1135]: Accepted publickey for core from 172.24.4.1 port 36880 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:30:46.474697 sshd[1135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:46.484524 systemd-logind[1042]: New session 3 of user core. Feb 8 23:30:46.487499 systemd[1]: Started session-3.scope. Feb 8 23:30:47.113972 sshd[1135]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:47.117006 systemd[1]: Started sshd@3-172.24.4.234:22-172.24.4.1:36886.service. Feb 8 23:30:47.122044 systemd[1]: sshd@2-172.24.4.234:22-172.24.4.1:36880.service: Deactivated successfully. Feb 8 23:30:47.122934 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:30:47.124230 systemd-logind[1042]: Session 3 logged out. Waiting for processes to exit. Feb 8 23:30:47.126197 systemd-logind[1042]: Removed session 3. 
Feb 8 23:30:48.298086 sshd[1141]: Accepted publickey for core from 172.24.4.1 port 36886 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:30:48.300625 sshd[1141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:48.309425 systemd-logind[1042]: New session 4 of user core. Feb 8 23:30:48.310256 systemd[1]: Started session-4.scope. Feb 8 23:30:48.940497 sshd[1141]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:48.946070 systemd[1]: sshd@3-172.24.4.234:22-172.24.4.1:36886.service: Deactivated successfully. Feb 8 23:30:48.947741 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:30:48.949643 systemd-logind[1042]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:30:48.954188 systemd[1]: Started sshd@4-172.24.4.234:22-172.24.4.1:36894.service. Feb 8 23:30:48.957624 systemd-logind[1042]: Removed session 4. Feb 8 23:30:50.128450 sshd[1148]: Accepted publickey for core from 172.24.4.1 port 36894 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:30:50.130686 sshd[1148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:50.141199 systemd-logind[1042]: New session 5 of user core. Feb 8 23:30:50.141921 systemd[1]: Started session-5.scope. Feb 8 23:30:50.619884 sudo[1151]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:30:50.620373 sudo[1151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:30:51.224565 systemd[1]: Reloading. Feb 8 23:30:51.390534 /usr/lib/systemd/system-generators/torcx-generator[1207]: time="2024-02-08T23:30:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:30:51.390575 /usr/lib/systemd/system-generators/torcx-generator[1207]: time="2024-02-08T23:30:51Z" level=info msg="torcx already run" Feb 8 23:30:51.433911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:30:51.434100 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:30:51.457724 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:30:51.543328 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:30:51.557281 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:30:51.557830 systemd[1]: Reached target network-online.target. Feb 8 23:30:51.559714 systemd[1]: Started kubelet.service. Feb 8 23:30:51.572680 systemd[1]: Starting coreos-metadata.service... 
Feb 8 23:30:51.637926 coreos-metadata[1235]: Feb 08 23:30:51.637 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 8 23:30:51.639171 kubelet[1227]: E0208 23:30:51.639136 1227 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:30:51.640938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:30:51.641065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:30:51.864760 coreos-metadata[1235]: Feb 08 23:30:51.864 INFO Fetch successful Feb 8 23:30:51.864760 coreos-metadata[1235]: Feb 08 23:30:51.864 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 8 23:30:51.883679 coreos-metadata[1235]: Feb 08 23:30:51.883 INFO Fetch successful Feb 8 23:30:51.883679 coreos-metadata[1235]: Feb 08 23:30:51.883 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 8 23:30:51.900944 coreos-metadata[1235]: Feb 08 23:30:51.900 INFO Fetch successful Feb 8 23:30:51.900944 coreos-metadata[1235]: Feb 08 23:30:51.900 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 8 23:30:51.918837 coreos-metadata[1235]: Feb 08 23:30:51.918 INFO Fetch successful Feb 8 23:30:51.918837 coreos-metadata[1235]: Feb 08 23:30:51.918 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 8 23:30:51.934909 coreos-metadata[1235]: Feb 08 23:30:51.934 INFO Fetch successful Feb 8 23:30:51.951334 systemd[1]: Finished coreos-metadata.service. Feb 8 23:30:52.705266 systemd[1]: Stopped kubelet.service. Feb 8 23:30:52.742454 systemd[1]: Reloading. Feb 8 23:30:52.885937 /usr/lib/systemd/system-generators/torcx-generator[1289]: time="2024-02-08T23:30:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:30:52.885995 /usr/lib/systemd/system-generators/torcx-generator[1289]: time="2024-02-08T23:30:52Z" level=info msg="torcx already run" Feb 8 23:30:52.954170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:30:52.954192 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:30:52.977034 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:30:53.080260 systemd[1]: Started kubelet.service. Feb 8 23:30:53.138532 kubelet[1336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:30:53.138919 kubelet[1336]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
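The first kubelet start above (pid 1227) exits immediately because /var/lib/kubelet/config.yaml does not exist, and the service only stays up on the second attempt (pid 1336). For orientation only, a minimal KubeletConfiguration for a node like this one could look like the sketch below. The systemd cgroup driver, the /etc/kubernetes/pki/ca.crt client-CA path, the /etc/kubernetes/manifests static-pod path and the hard-eviction thresholds are taken from the kubelet's own log entries that follow; everything else (anonymous auth off, webhook authorization, certificate rotation) is an assumption that is consistent with, but not proven by, this log.

# /var/lib/kubelet/config.yaml -- illustrative sketch only, not the file actually used on this host
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                         # matches CgroupDriver:systemd in the container-manager dump below
staticPodPath: /etc/kubernetes/manifests      # the path the kubelet keeps probing in the entries below
authentication:
  anonymous:
    enabled: false                            # assumption
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt  # path from the client-ca-bundle controller line below
authorization:
  mode: Webhook                               # assumption
rotateCertificates: true                      # assumption; consistent with "Client rotation is on"
evictionHard:                                 # thresholds as printed in the NodeConfig dump below
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"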
Feb 8 23:30:53.138997 kubelet[1336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:30:53.139125 kubelet[1336]: I0208 23:30:53.139097 1336 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:30:53.748925 kubelet[1336]: I0208 23:30:53.748873 1336 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 8 23:30:53.748925 kubelet[1336]: I0208 23:30:53.748928 1336 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:30:53.749358 kubelet[1336]: I0208 23:30:53.749327 1336 server.go:837] "Client rotation is on, will bootstrap in background" Feb 8 23:30:53.757184 kubelet[1336]: I0208 23:30:53.757156 1336 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:30:53.757475 kubelet[1336]: I0208 23:30:53.757460 1336 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:30:53.757763 kubelet[1336]: I0208 23:30:53.757748 1336 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:30:53.757915 kubelet[1336]: I0208 23:30:53.757902 1336 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:30:53.758062 kubelet[1336]: I0208 23:30:53.758049 1336 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:30:53.758127 kubelet[1336]: I0208 23:30:53.758118 1336 container_manager_linux.go:302] "Creating device plugin manager" Feb 8 23:30:53.758273 kubelet[1336]: I0208 23:30:53.758261 1336 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:30:53.762168 kubelet[1336]: I0208 23:30:53.762151 1336 kubelet.go:405] "Attempting to sync node with API server" Feb 8 23:30:53.762296 kubelet[1336]: I0208 23:30:53.762285 1336 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:30:53.762400 kubelet[1336]: I0208 23:30:53.762369 1336 kubelet.go:309] "Adding apiserver pod source" Feb 8 
23:30:53.762493 kubelet[1336]: I0208 23:30:53.762482 1336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:30:53.762984 kubelet[1336]: E0208 23:30:53.762971 1336 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:30:53.763076 kubelet[1336]: E0208 23:30:53.763065 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:30:53.763879 kubelet[1336]: I0208 23:30:53.763867 1336 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:30:53.764170 kubelet[1336]: W0208 23:30:53.764158 1336 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:30:53.764710 kubelet[1336]: I0208 23:30:53.764695 1336 server.go:1168] "Started kubelet" Feb 8 23:30:53.767609 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 8 23:30:53.767690 kubelet[1336]: E0208 23:30:53.766515 1336 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:30:53.767690 kubelet[1336]: E0208 23:30:53.766565 1336 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:30:53.767864 kubelet[1336]: I0208 23:30:53.767850 1336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:30:53.772803 kubelet[1336]: I0208 23:30:53.772761 1336 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:30:53.774122 kubelet[1336]: I0208 23:30:53.774084 1336 server.go:461] "Adding debug handlers to kubelet server" Feb 8 23:30:53.776885 kubelet[1336]: I0208 23:30:53.776852 1336 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:30:53.778532 kubelet[1336]: I0208 23:30:53.778520 1336 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 8 23:30:53.778734 kubelet[1336]: I0208 23:30:53.778722 1336 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 8 23:30:53.783168 kubelet[1336]: E0208 23:30:53.783155 1336 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.234\" not found" Feb 8 23:30:53.805807 kubelet[1336]: W0208 23:30:53.805751 1336 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:30:53.805951 kubelet[1336]: E0208 23:30:53.805872 1336 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:30:53.806086 kubelet[1336]: W0208 23:30:53.806046 1336 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.234" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:30:53.806144 kubelet[1336]: E0208 23:30:53.806089 1336 reflector.go:148] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.234" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:30:53.806177 kubelet[1336]: W0208 23:30:53.806167 1336 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:30:53.806207 kubelet[1336]: E0208 23:30:53.806191 1336 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:30:53.806497 kubelet[1336]: E0208 23:30:53.806276 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b2072290b18730", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 764675376, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 764675376, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:30:53.806859 kubelet[1336]: E0208 23:30:53.806821 1336 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.234\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 8 23:30:53.808337 kubelet[1336]: I0208 23:30:53.808272 1336 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:30:53.808337 kubelet[1336]: I0208 23:30:53.808311 1336 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:30:53.808472 kubelet[1336]: I0208 23:30:53.808346 1336 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:30:53.808747 kubelet[1336]: E0208 23:30:53.808668 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b2072290ce10c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 766545605, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 766545605, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:30:53.810510 kubelet[1336]: E0208 23:30:53.810458 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933bf5ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.234 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807302125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807302125, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:53.813612 kubelet[1336]: E0208 23:30:53.813462 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c20e3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.234 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807313123, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807313123, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:30:53.815873 kubelet[1336]: E0208 23:30:53.815741 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c3825", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.234 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807319077, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807319077, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:53.819845 kubelet[1336]: I0208 23:30:53.819768 1336 policy_none.go:49] "None policy: Start" Feb 8 23:30:53.823957 kubelet[1336]: I0208 23:30:53.823882 1336 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:30:53.823957 kubelet[1336]: I0208 23:30:53.823941 1336 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:30:53.838889 systemd[1]: Created slice kubepods.slice. Feb 8 23:30:53.847410 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:30:53.854158 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 8 23:30:53.859188 kubelet[1336]: I0208 23:30:53.859143 1336 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:30:53.859518 kubelet[1336]: I0208 23:30:53.859489 1336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:30:53.861985 kubelet[1336]: E0208 23:30:53.861733 1336 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.234\" not found" Feb 8 23:30:53.864432 kubelet[1336]: E0208 23:30:53.864256 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b207229682cdc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 862276552, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 862276552, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:30:53.884188 kubelet[1336]: I0208 23:30:53.884148 1336 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.234" Feb 8 23:30:53.886139 kubelet[1336]: E0208 23:30:53.886054 1336 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.234" Feb 8 23:30:53.886854 kubelet[1336]: E0208 23:30:53.886777 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933bf5ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.234 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807302125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 884095966, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933bf5ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:53.888235 kubelet[1336]: E0208 23:30:53.888179 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c20e3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.234 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807313123, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 884100969, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933c20e3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:30:53.889798 kubelet[1336]: E0208 23:30:53.889735 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c3825", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.234 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807319077, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 884103781, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933c3825" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:53.958598 kubelet[1336]: I0208 23:30:53.958530 1336 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:30:53.960555 kubelet[1336]: I0208 23:30:53.960519 1336 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:30:53.960909 kubelet[1336]: I0208 23:30:53.960880 1336 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 8 23:30:53.961181 kubelet[1336]: I0208 23:30:53.961152 1336 kubelet.go:2257] "Starting kubelet main sync loop" Feb 8 23:30:53.961645 kubelet[1336]: E0208 23:30:53.961615 1336 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:30:53.964102 kubelet[1336]: W0208 23:30:53.964061 1336 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:30:53.964372 kubelet[1336]: E0208 23:30:53.964346 1336 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:30:54.010156 kubelet[1336]: E0208 23:30:54.009935 1336 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.234\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 8 23:30:54.089090 kubelet[1336]: I0208 23:30:54.089012 1336 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.234" Feb 8 23:30:54.091162 kubelet[1336]: E0208 23:30:54.091120 1336 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API 
group \"\" at the cluster scope" node="172.24.4.234" Feb 8 23:30:54.091908 kubelet[1336]: E0208 23:30:54.091749 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933bf5ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.234 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807302125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 54, 88910700, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933bf5ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:54.093740 kubelet[1336]: E0208 23:30:54.093595 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c20e3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.234 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807313123, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 54, 88961354, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933c20e3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:30:54.094961 kubelet[1336]: E0208 23:30:54.094857 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c3825", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.234 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807319077, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 54, 88968911, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933c3825" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:54.415005 kubelet[1336]: E0208 23:30:54.414920 1336 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.234\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 8 23:30:54.493292 kubelet[1336]: I0208 23:30:54.493240 1336 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.234" Feb 8 23:30:54.495959 kubelet[1336]: E0208 23:30:54.495895 1336 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.234" Feb 8 23:30:54.496631 kubelet[1336]: E0208 23:30:54.496483 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933bf5ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.234 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807302125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 54, 493136124, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "172.24.4.234.17b20722933bf5ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:54.499006 kubelet[1336]: E0208 23:30:54.498881 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c20e3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.234 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807313123, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 54, 493146153, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933c20e3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:30:54.501312 kubelet[1336]: E0208 23:30:54.501204 1336 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.234.17b20722933c3825", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.234", UID:"172.24.4.234", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.234 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.234"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 30, 53, 807319077, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 30, 54, 493151789, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.234.17b20722933c3825" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:30:54.759544 kubelet[1336]: I0208 23:30:54.759311 1336 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 8 23:30:54.763688 kubelet[1336]: E0208 23:30:54.763646 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:30:55.207894 kubelet[1336]: E0208 23:30:55.207818 1336 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.234" not found Feb 8 23:30:55.224821 kubelet[1336]: E0208 23:30:55.224774 1336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.234\" not found" node="172.24.4.234" Feb 8 23:30:55.297366 kubelet[1336]: I0208 23:30:55.297329 1336 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.234" Feb 8 23:30:55.307276 kubelet[1336]: I0208 23:30:55.307234 1336 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.234" Feb 8 23:30:55.348762 kubelet[1336]: I0208 23:30:55.348729 1336 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 8 23:30:55.349895 env[1054]: time="2024-02-08T23:30:55.349793555Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:30:55.350668 kubelet[1336]: I0208 23:30:55.350161 1336 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 8 23:30:55.611951 sudo[1151]: pam_unix(sudo:session): session closed for user root Feb 8 23:30:55.764930 kubelet[1336]: E0208 23:30:55.764644 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:30:55.764930 kubelet[1336]: I0208 23:30:55.764741 1336 apiserver.go:52] "Watching apiserver" Feb 8 23:30:55.770151 kubelet[1336]: I0208 23:30:55.770094 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:55.770322 kubelet[1336]: I0208 23:30:55.770264 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:55.782212 systemd[1]: Created slice kubepods-besteffort-podcde35264_ca26_4e1d_91f3_78492bd9e9d6.slice. 
Feb 8 23:30:55.784164 kubelet[1336]: I0208 23:30:55.784119 1336 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 8 23:30:55.792481 kubelet[1336]: I0208 23:30:55.792362 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-hubble-tls\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.792664 kubelet[1336]: I0208 23:30:55.792510 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cde35264-ca26-4e1d-91f3-78492bd9e9d6-xtables-lock\") pod \"kube-proxy-x67mj\" (UID: \"cde35264-ca26-4e1d-91f3-78492bd9e9d6\") " pod="kube-system/kube-proxy-x67mj" Feb 8 23:30:55.792664 kubelet[1336]: I0208 23:30:55.792570 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-bpf-maps\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.792664 kubelet[1336]: I0208 23:30:55.792628 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-etc-cni-netd\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.792847 kubelet[1336]: I0208 23:30:55.792681 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-xtables-lock\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.792847 kubelet[1336]: I0208 23:30:55.792738 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a3dce3c-90fc-44fc-996e-c1e78804a048-clustermesh-secrets\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.792847 kubelet[1336]: I0208 23:30:55.792790 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cni-path\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.792847 kubelet[1336]: I0208 23:30:55.792842 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cde35264-ca26-4e1d-91f3-78492bd9e9d6-lib-modules\") pod \"kube-proxy-x67mj\" (UID: \"cde35264-ca26-4e1d-91f3-78492bd9e9d6\") " pod="kube-system/kube-proxy-x67mj" Feb 8 23:30:55.793121 kubelet[1336]: I0208 23:30:55.792900 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5ns\" (UniqueName: \"kubernetes.io/projected/cde35264-ca26-4e1d-91f3-78492bd9e9d6-kube-api-access-4p5ns\") pod \"kube-proxy-x67mj\" (UID: \"cde35264-ca26-4e1d-91f3-78492bd9e9d6\") " pod="kube-system/kube-proxy-x67mj" Feb 8 23:30:55.793121 
kubelet[1336]: I0208 23:30:55.792963 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-run\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793121 kubelet[1336]: I0208 23:30:55.793023 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-cgroup\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793121 kubelet[1336]: I0208 23:30:55.793077 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-lib-modules\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793367 kubelet[1336]: I0208 23:30:55.793128 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cde35264-ca26-4e1d-91f3-78492bd9e9d6-kube-proxy\") pod \"kube-proxy-x67mj\" (UID: \"cde35264-ca26-4e1d-91f3-78492bd9e9d6\") " pod="kube-system/kube-proxy-x67mj" Feb 8 23:30:55.793367 kubelet[1336]: I0208 23:30:55.793186 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jngc\" (UniqueName: \"kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-kube-api-access-4jngc\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793367 kubelet[1336]: I0208 23:30:55.793239 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-hostproc\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793367 kubelet[1336]: I0208 23:30:55.793306 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-config-path\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793367 kubelet[1336]: I0208 23:30:55.793360 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-net\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793728 kubelet[1336]: I0208 23:30:55.793455 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-kernel\") pod \"cilium-pjfj6\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " pod="kube-system/cilium-pjfj6" Feb 8 23:30:55.793728 kubelet[1336]: I0208 23:30:55.793476 1336 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:30:55.801870 systemd[1]: Created slice 
kubepods-burstable-pod7a3dce3c_90fc_44fc_996e_c1e78804a048.slice. Feb 8 23:30:55.873858 sshd[1148]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:55.881090 systemd[1]: sshd@4-172.24.4.234:22-172.24.4.1:36894.service: Deactivated successfully. Feb 8 23:30:55.882830 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:30:55.884475 systemd-logind[1042]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:30:55.887203 systemd-logind[1042]: Removed session 5. Feb 8 23:30:56.097861 env[1054]: time="2024-02-08T23:30:56.097287316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x67mj,Uid:cde35264-ca26-4e1d-91f3-78492bd9e9d6,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:56.115497 env[1054]: time="2024-02-08T23:30:56.115361481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pjfj6,Uid:7a3dce3c-90fc-44fc-996e-c1e78804a048,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:56.765253 kubelet[1336]: E0208 23:30:56.765163 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:30:56.930006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657687988.mount: Deactivated successfully. Feb 8 23:30:56.951626 env[1054]: time="2024-02-08T23:30:56.951551021Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:56.954972 env[1054]: time="2024-02-08T23:30:56.954915407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:56.965741 env[1054]: time="2024-02-08T23:30:56.965653203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:56.970242 env[1054]: time="2024-02-08T23:30:56.970178271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:56.981261 env[1054]: time="2024-02-08T23:30:56.981198882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:56.989264 env[1054]: time="2024-02-08T23:30:56.989157514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:56.994307 env[1054]: time="2024-02-08T23:30:56.994245899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:57.000254 env[1054]: time="2024-02-08T23:30:57.000204805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:57.044461 env[1054]: time="2024-02-08T23:30:57.040103418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:57.044461 env[1054]: time="2024-02-08T23:30:57.040167326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:57.044461 env[1054]: time="2024-02-08T23:30:57.040182204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:57.044461 env[1054]: time="2024-02-08T23:30:57.040331454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c pid=1389 runtime=io.containerd.runc.v2 Feb 8 23:30:57.056938 env[1054]: time="2024-02-08T23:30:57.056838387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:57.056938 env[1054]: time="2024-02-08T23:30:57.056880968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:57.056938 env[1054]: time="2024-02-08T23:30:57.056894514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:57.057540 env[1054]: time="2024-02-08T23:30:57.057372242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a744277dfc29b8e3ad0db90dce48432d9ecdfc6dfb460c3282c4388b57ca6559 pid=1405 runtime=io.containerd.runc.v2 Feb 8 23:30:57.075870 systemd[1]: Started cri-containerd-a744277dfc29b8e3ad0db90dce48432d9ecdfc6dfb460c3282c4388b57ca6559.scope. Feb 8 23:30:57.099596 systemd[1]: Started cri-containerd-687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c.scope. Feb 8 23:30:57.126010 env[1054]: time="2024-02-08T23:30:57.125948095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x67mj,Uid:cde35264-ca26-4e1d-91f3-78492bd9e9d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a744277dfc29b8e3ad0db90dce48432d9ecdfc6dfb460c3282c4388b57ca6559\"" Feb 8 23:30:57.128350 env[1054]: time="2024-02-08T23:30:57.128246764Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 8 23:30:57.138815 env[1054]: time="2024-02-08T23:30:57.138750921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pjfj6,Uid:7a3dce3c-90fc-44fc-996e-c1e78804a048,Namespace:kube-system,Attempt:0,} returns sandbox id \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\"" Feb 8 23:30:57.765920 kubelet[1336]: E0208 23:30:57.765858 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:30:58.586142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686640424.mount: Deactivated successfully. 
Feb 8 23:30:58.766730 kubelet[1336]: E0208 23:30:58.766635 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:30:59.371238 env[1054]: time="2024-02-08T23:30:59.371143773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:59.374217 env[1054]: time="2024-02-08T23:30:59.374168001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:59.377258 env[1054]: time="2024-02-08T23:30:59.377182837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:59.379559 env[1054]: time="2024-02-08T23:30:59.379508673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:59.380229 env[1054]: time="2024-02-08T23:30:59.380173690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 8 23:30:59.384426 env[1054]: time="2024-02-08T23:30:59.384342201Z" level=info msg="CreateContainer within sandbox \"a744277dfc29b8e3ad0db90dce48432d9ecdfc6dfb460c3282c4388b57ca6559\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:30:59.384722 env[1054]: time="2024-02-08T23:30:59.384662548Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:30:59.421498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407793479.mount: Deactivated successfully. Feb 8 23:30:59.423314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582959434.mount: Deactivated successfully. Feb 8 23:30:59.437826 env[1054]: time="2024-02-08T23:30:59.437751981Z" level=info msg="CreateContainer within sandbox \"a744277dfc29b8e3ad0db90dce48432d9ecdfc6dfb460c3282c4388b57ca6559\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0efff949d85c1099e90d0ec25eaad2005447f1bf722c1d307417cfddbb62d8ea\"" Feb 8 23:30:59.438589 env[1054]: time="2024-02-08T23:30:59.438544443Z" level=info msg="StartContainer for \"0efff949d85c1099e90d0ec25eaad2005447f1bf722c1d307417cfddbb62d8ea\"" Feb 8 23:30:59.477479 systemd[1]: Started cri-containerd-0efff949d85c1099e90d0ec25eaad2005447f1bf722c1d307417cfddbb62d8ea.scope. 
Feb 8 23:30:59.520936 env[1054]: time="2024-02-08T23:30:59.520881022Z" level=info msg="StartContainer for \"0efff949d85c1099e90d0ec25eaad2005447f1bf722c1d307417cfddbb62d8ea\" returns successfully" Feb 8 23:30:59.767647 kubelet[1336]: E0208 23:30:59.767454 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:00.017127 kubelet[1336]: I0208 23:31:00.017010 1336 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x67mj" podStartSLOduration=2.763110931 podCreationTimestamp="2024-02-08 23:30:55 +0000 UTC" firstStartedPulling="2024-02-08 23:30:57.127719236 +0000 UTC m=+4.042979024" lastFinishedPulling="2024-02-08 23:30:59.381525345 +0000 UTC m=+6.296785183" observedRunningTime="2024-02-08 23:31:00.016560961 +0000 UTC m=+6.931820799" watchObservedRunningTime="2024-02-08 23:31:00.01691709 +0000 UTC m=+6.932176918" Feb 8 23:31:00.768059 kubelet[1336]: E0208 23:31:00.767996 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:01.769244 kubelet[1336]: E0208 23:31:01.769165 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:02.770087 kubelet[1336]: E0208 23:31:02.770001 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:03.770736 kubelet[1336]: E0208 23:31:03.770655 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:04.771411 kubelet[1336]: E0208 23:31:04.771328 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:05.771830 kubelet[1336]: E0208 23:31:05.771714 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:06.772474 kubelet[1336]: E0208 23:31:06.772356 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:07.182303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916036440.mount: Deactivated successfully. 
Feb 8 23:31:07.772653 kubelet[1336]: E0208 23:31:07.772594 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:08.773746 kubelet[1336]: E0208 23:31:08.773701 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:09.774550 kubelet[1336]: E0208 23:31:09.774420 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:10.775621 kubelet[1336]: E0208 23:31:10.775566 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:11.422543 env[1054]: time="2024-02-08T23:31:11.422415494Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:11.427593 env[1054]: time="2024-02-08T23:31:11.427519917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:11.433031 env[1054]: time="2024-02-08T23:31:11.432934677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:11.435150 env[1054]: time="2024-02-08T23:31:11.435054248Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:31:11.440653 env[1054]: time="2024-02-08T23:31:11.440591565Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:31:11.460360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610390381.mount: Deactivated successfully. Feb 8 23:31:11.473464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194200851.mount: Deactivated successfully. Feb 8 23:31:11.487900 env[1054]: time="2024-02-08T23:31:11.487824108Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\"" Feb 8 23:31:11.489658 env[1054]: time="2024-02-08T23:31:11.489572913Z" level=info msg="StartContainer for \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\"" Feb 8 23:31:11.530505 systemd[1]: Started cri-containerd-608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736.scope. Feb 8 23:31:11.577784 env[1054]: time="2024-02-08T23:31:11.577728945Z" level=info msg="StartContainer for \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\" returns successfully" Feb 8 23:31:11.584916 systemd[1]: cri-containerd-608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736.scope: Deactivated successfully. 
Feb 8 23:31:11.796930 kubelet[1336]: E0208 23:31:11.776057 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:12.309499 env[1054]: time="2024-02-08T23:31:12.309300834Z" level=info msg="shim disconnected" id=608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736 Feb 8 23:31:12.309499 env[1054]: time="2024-02-08T23:31:12.309453796Z" level=warning msg="cleaning up after shim disconnected" id=608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736 namespace=k8s.io Feb 8 23:31:12.309499 env[1054]: time="2024-02-08T23:31:12.309484503Z" level=info msg="cleaning up dead shim" Feb 8 23:31:12.326536 env[1054]: time="2024-02-08T23:31:12.326457624Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:31:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1672 runtime=io.containerd.runc.v2\n" Feb 8 23:31:12.456173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736-rootfs.mount: Deactivated successfully. Feb 8 23:31:12.776767 kubelet[1336]: E0208 23:31:12.776669 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:13.037738 env[1054]: time="2024-02-08T23:31:13.037531391Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:31:13.062500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983102972.mount: Deactivated successfully. Feb 8 23:31:13.089249 env[1054]: time="2024-02-08T23:31:13.089097524Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\"" Feb 8 23:31:13.090453 env[1054]: time="2024-02-08T23:31:13.090232240Z" level=info msg="StartContainer for \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\"" Feb 8 23:31:13.137957 systemd[1]: Started cri-containerd-f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce.scope. Feb 8 23:31:13.181573 env[1054]: time="2024-02-08T23:31:13.181515371Z" level=info msg="StartContainer for \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\" returns successfully" Feb 8 23:31:13.188417 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:31:13.188881 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:31:13.189068 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:31:13.192856 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:31:13.194002 systemd[1]: cri-containerd-f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce.scope: Deactivated successfully. Feb 8 23:31:13.200812 systemd[1]: Finished systemd-sysctl.service. 
Feb 8 23:31:13.225093 env[1054]: time="2024-02-08T23:31:13.225039290Z" level=info msg="shim disconnected" id=f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce Feb 8 23:31:13.225093 env[1054]: time="2024-02-08T23:31:13.225091626Z" level=warning msg="cleaning up after shim disconnected" id=f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce namespace=k8s.io Feb 8 23:31:13.225324 env[1054]: time="2024-02-08T23:31:13.225104530Z" level=info msg="cleaning up dead shim" Feb 8 23:31:13.233287 env[1054]: time="2024-02-08T23:31:13.233252579Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:31:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1739 runtime=io.containerd.runc.v2\n" Feb 8 23:31:13.455526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce-rootfs.mount: Deactivated successfully. Feb 8 23:31:13.763896 kubelet[1336]: E0208 23:31:13.763329 1336 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:13.777909 kubelet[1336]: E0208 23:31:13.777864 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:14.044897 env[1054]: time="2024-02-08T23:31:14.044071152Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:31:14.072617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125209122.mount: Deactivated successfully. Feb 8 23:31:14.086051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041971358.mount: Deactivated successfully. Feb 8 23:31:14.101429 env[1054]: time="2024-02-08T23:31:14.101301667Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\"" Feb 8 23:31:14.103236 env[1054]: time="2024-02-08T23:31:14.103172235Z" level=info msg="StartContainer for \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\"" Feb 8 23:31:14.144648 systemd[1]: Started cri-containerd-778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935.scope. Feb 8 23:31:14.189312 systemd[1]: cri-containerd-778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935.scope: Deactivated successfully. 
Feb 8 23:31:14.194171 env[1054]: time="2024-02-08T23:31:14.194136640Z" level=info msg="StartContainer for \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\" returns successfully" Feb 8 23:31:14.219038 env[1054]: time="2024-02-08T23:31:14.218976859Z" level=info msg="shim disconnected" id=778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935 Feb 8 23:31:14.219237 env[1054]: time="2024-02-08T23:31:14.219058740Z" level=warning msg="cleaning up after shim disconnected" id=778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935 namespace=k8s.io Feb 8 23:31:14.219237 env[1054]: time="2024-02-08T23:31:14.219072596Z" level=info msg="cleaning up dead shim" Feb 8 23:31:14.228568 env[1054]: time="2024-02-08T23:31:14.228523406Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:31:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1798 runtime=io.containerd.runc.v2\n" Feb 8 23:31:14.778883 kubelet[1336]: E0208 23:31:14.778817 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:15.050731 env[1054]: time="2024-02-08T23:31:15.050549155Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:31:15.094667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764007808.mount: Deactivated successfully. Feb 8 23:31:15.111575 env[1054]: time="2024-02-08T23:31:15.111335002Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\"" Feb 8 23:31:15.114120 env[1054]: time="2024-02-08T23:31:15.113950670Z" level=info msg="StartContainer for \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\"" Feb 8 23:31:15.156345 systemd[1]: Started cri-containerd-d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f.scope. Feb 8 23:31:15.190469 systemd[1]: cri-containerd-d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f.scope: Deactivated successfully. Feb 8 23:31:15.194887 env[1054]: time="2024-02-08T23:31:15.194847583Z" level=info msg="StartContainer for \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\" returns successfully" Feb 8 23:31:15.223778 env[1054]: time="2024-02-08T23:31:15.223729447Z" level=info msg="shim disconnected" id=d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f Feb 8 23:31:15.224114 env[1054]: time="2024-02-08T23:31:15.224094592Z" level=warning msg="cleaning up after shim disconnected" id=d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f namespace=k8s.io Feb 8 23:31:15.224208 env[1054]: time="2024-02-08T23:31:15.224191391Z" level=info msg="cleaning up dead shim" Feb 8 23:31:15.231290 env[1054]: time="2024-02-08T23:31:15.231264526Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:31:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1853 runtime=io.containerd.runc.v2\n" Feb 8 23:31:15.455825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f-rootfs.mount: Deactivated successfully. 
Feb 8 23:31:15.779162 kubelet[1336]: E0208 23:31:15.778999 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:16.061441 env[1054]: time="2024-02-08T23:31:16.061180683Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:31:16.093899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115612298.mount: Deactivated successfully. Feb 8 23:31:16.107498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167733848.mount: Deactivated successfully. Feb 8 23:31:16.122991 env[1054]: time="2024-02-08T23:31:16.122849569Z" level=info msg="CreateContainer within sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\"" Feb 8 23:31:16.124057 env[1054]: time="2024-02-08T23:31:16.124000079Z" level=info msg="StartContainer for \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\"" Feb 8 23:31:16.165118 systemd[1]: Started cri-containerd-bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814.scope. Feb 8 23:31:16.224723 env[1054]: time="2024-02-08T23:31:16.224663761Z" level=info msg="StartContainer for \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\" returns successfully" Feb 8 23:31:16.318393 kubelet[1336]: I0208 23:31:16.317516 1336 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:31:16.760434 kernel: Initializing XFRM netlink socket Feb 8 23:31:16.780291 kubelet[1336]: E0208 23:31:16.780197 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:17.781434 kubelet[1336]: E0208 23:31:17.781315 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:18.523224 systemd-networkd[966]: cilium_host: Link UP Feb 8 23:31:18.526781 systemd-networkd[966]: cilium_net: Link UP Feb 8 23:31:18.533497 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:31:18.533642 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:31:18.530655 systemd-networkd[966]: cilium_net: Gained carrier Feb 8 23:31:18.534678 systemd-networkd[966]: cilium_host: Gained carrier Feb 8 23:31:18.631860 systemd-networkd[966]: cilium_net: Gained IPv6LL Feb 8 23:31:18.655976 systemd-networkd[966]: cilium_vxlan: Link UP Feb 8 23:31:18.655994 systemd-networkd[966]: cilium_vxlan: Gained carrier Feb 8 23:31:18.783226 kubelet[1336]: E0208 23:31:18.782963 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:18.987488 kernel: NET: Registered PF_ALG protocol family Feb 8 23:31:19.031662 systemd-networkd[966]: cilium_host: Gained IPv6LL Feb 8 23:31:19.784314 kubelet[1336]: E0208 23:31:19.784199 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:19.835298 systemd-networkd[966]: lxc_health: Link UP Feb 8 23:31:19.849020 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:31:19.848530 systemd-networkd[966]: lxc_health: Gained carrier Feb 8 23:31:20.153210 kubelet[1336]: I0208 23:31:20.153151 1336 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pjfj6" podStartSLOduration=10.857198556 podCreationTimestamp="2024-02-08 23:30:55 +0000 UTC" firstStartedPulling="2024-02-08 23:30:57.139926572 +0000 UTC m=+4.055186360" lastFinishedPulling="2024-02-08 23:31:11.435765596 +0000 UTC m=+18.351025434" observedRunningTime="2024-02-08 23:31:17.114006788 +0000 UTC m=+24.029266636" watchObservedRunningTime="2024-02-08 23:31:20.15303763 +0000 UTC m=+27.068297468" Feb 8 23:31:20.311663 systemd-networkd[966]: cilium_vxlan: Gained IPv6LL Feb 8 23:31:20.785301 kubelet[1336]: E0208 23:31:20.785236 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:21.719735 systemd-networkd[966]: lxc_health: Gained IPv6LL Feb 8 23:31:21.785847 kubelet[1336]: E0208 23:31:21.785775 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:22.403618 kubelet[1336]: I0208 23:31:22.403563 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:31:22.413101 systemd[1]: Created slice kubepods-besteffort-pod2c43b82e_c885_4f33_85e8_24c21fbf081d.slice. Feb 8 23:31:22.589421 kubelet[1336]: I0208 23:31:22.589324 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqmp\" (UniqueName: \"kubernetes.io/projected/2c43b82e-c885-4f33-85e8-24c21fbf081d-kube-api-access-5dqmp\") pod \"nginx-deployment-845c78c8b9-7kfql\" (UID: \"2c43b82e-c885-4f33-85e8-24c21fbf081d\") " pod="default/nginx-deployment-845c78c8b9-7kfql" Feb 8 23:31:22.727761 env[1054]: time="2024-02-08T23:31:22.725942559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-7kfql,Uid:2c43b82e-c885-4f33-85e8-24c21fbf081d,Namespace:default,Attempt:0,}" Feb 8 23:31:22.786564 kubelet[1336]: E0208 23:31:22.786367 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:22.815123 systemd-networkd[966]: lxc45137f7f5d1d: Link UP Feb 8 23:31:22.827450 kernel: eth0: renamed from tmp01a9d Feb 8 23:31:22.835987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:31:22.836065 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45137f7f5d1d: link becomes ready Feb 8 23:31:22.836207 systemd-networkd[966]: lxc45137f7f5d1d: Gained carrier Feb 8 23:31:23.786878 kubelet[1336]: E0208 23:31:23.786854 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:24.437683 systemd-networkd[966]: lxc45137f7f5d1d: Gained IPv6LL Feb 8 23:31:24.788460 update_engine[1043]: I0208 23:31:24.787053 1043 update_attempter.cc:509] Updating boot flags... Feb 8 23:31:24.789134 kubelet[1336]: E0208 23:31:24.787814 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:25.173257 kubelet[1336]: I0208 23:31:25.173207 1336 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 8 23:31:25.331113 env[1054]: time="2024-02-08T23:31:25.330904985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:31:25.331113 env[1054]: time="2024-02-08T23:31:25.330948616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:31:25.331113 env[1054]: time="2024-02-08T23:31:25.330962271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:31:25.335542 env[1054]: time="2024-02-08T23:31:25.331121838Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01a9d0d2379e0108aa630a8e811f33e31933626d779e24d47cc2c130ec452d84 pid=2380 runtime=io.containerd.runc.v2 Feb 8 23:31:25.347778 systemd[1]: Started cri-containerd-01a9d0d2379e0108aa630a8e811f33e31933626d779e24d47cc2c130ec452d84.scope. Feb 8 23:31:25.401195 env[1054]: time="2024-02-08T23:31:25.401144239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-7kfql,Uid:2c43b82e-c885-4f33-85e8-24c21fbf081d,Namespace:default,Attempt:0,} returns sandbox id \"01a9d0d2379e0108aa630a8e811f33e31933626d779e24d47cc2c130ec452d84\"" Feb 8 23:31:25.403522 env[1054]: time="2024-02-08T23:31:25.403487018Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:31:25.788904 kubelet[1336]: E0208 23:31:25.788787 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:26.789721 kubelet[1336]: E0208 23:31:26.789585 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:27.790581 kubelet[1336]: E0208 23:31:27.790512 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:28.791033 kubelet[1336]: E0208 23:31:28.790983 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:29.791882 kubelet[1336]: E0208 23:31:29.791825 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:29.894318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206371302.mount: Deactivated successfully. 
Feb 8 23:31:30.792694 kubelet[1336]: E0208 23:31:30.792566 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:31.793136 kubelet[1336]: E0208 23:31:31.793077 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:32.794850 kubelet[1336]: E0208 23:31:32.794799 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:33.762929 kubelet[1336]: E0208 23:31:33.762882 1336 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:33.796605 kubelet[1336]: E0208 23:31:33.796567 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:34.797749 kubelet[1336]: E0208 23:31:34.797636 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:35.798722 kubelet[1336]: E0208 23:31:35.798587 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:36.799556 kubelet[1336]: E0208 23:31:36.799509 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:37.799793 kubelet[1336]: E0208 23:31:37.799722 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:38.800995 kubelet[1336]: E0208 23:31:38.800930 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:39.801973 kubelet[1336]: E0208 23:31:39.801929 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:40.802789 kubelet[1336]: E0208 23:31:40.802655 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:41.803415 kubelet[1336]: E0208 23:31:41.803320 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:42.804310 kubelet[1336]: E0208 23:31:42.804206 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:43.059203 env[1054]: time="2024-02-08T23:31:43.059015468Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:43.065077 env[1054]: time="2024-02-08T23:31:43.064990572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:43.067986 env[1054]: time="2024-02-08T23:31:43.067937809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:43.070937 env[1054]: time="2024-02-08T23:31:43.070891598Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 8 23:31:43.071810 env[1054]: time="2024-02-08T23:31:43.071761344Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:31:43.074276 env[1054]: time="2024-02-08T23:31:43.074216801Z" level=info msg="CreateContainer within sandbox \"01a9d0d2379e0108aa630a8e811f33e31933626d779e24d47cc2c130ec452d84\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 8 23:31:43.097454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552841624.mount: Deactivated successfully. Feb 8 23:31:43.103242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454566472.mount: Deactivated successfully. Feb 8 23:31:43.120414 env[1054]: time="2024-02-08T23:31:43.120340174Z" level=info msg="CreateContainer within sandbox \"01a9d0d2379e0108aa630a8e811f33e31933626d779e24d47cc2c130ec452d84\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"72587072e9fa2ef41fccf1c56ad20929fe361591e678afee006e744b6c418da9\"" Feb 8 23:31:43.121171 env[1054]: time="2024-02-08T23:31:43.121148534Z" level=info msg="StartContainer for \"72587072e9fa2ef41fccf1c56ad20929fe361591e678afee006e744b6c418da9\"" Feb 8 23:31:43.148202 systemd[1]: Started cri-containerd-72587072e9fa2ef41fccf1c56ad20929fe361591e678afee006e744b6c418da9.scope. Feb 8 23:31:43.197929 env[1054]: time="2024-02-08T23:31:43.197879708Z" level=info msg="StartContainer for \"72587072e9fa2ef41fccf1c56ad20929fe361591e678afee006e744b6c418da9\" returns successfully" Feb 8 23:31:43.804634 kubelet[1336]: E0208 23:31:43.804593 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:44.165989 kubelet[1336]: I0208 23:31:44.165938 1336 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-7kfql" podStartSLOduration=4.495932232 podCreationTimestamp="2024-02-08 23:31:22 +0000 UTC" firstStartedPulling="2024-02-08 23:31:25.402849151 +0000 UTC m=+32.318108939" lastFinishedPulling="2024-02-08 23:31:43.072785036 +0000 UTC m=+49.988044824" observedRunningTime="2024-02-08 23:31:44.160315941 +0000 UTC m=+51.075575779" watchObservedRunningTime="2024-02-08 23:31:44.165868117 +0000 UTC m=+51.081127955" Feb 8 23:31:44.806196 kubelet[1336]: E0208 23:31:44.806160 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:45.807092 kubelet[1336]: E0208 23:31:45.807060 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:46.808714 kubelet[1336]: E0208 23:31:46.808658 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:47.810249 kubelet[1336]: E0208 23:31:47.810197 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:48.811618 kubelet[1336]: E0208 23:31:48.811547 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:49.812536 kubelet[1336]: E0208 23:31:49.812481 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:50.813008 kubelet[1336]: E0208 23:31:50.812846 1336 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:51.734272 kubelet[1336]: I0208 23:31:51.734095 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:31:51.744069 systemd[1]: Created slice kubepods-besteffort-poda2c2baef_34d6_46d9_be63_aacc5612a477.slice. Feb 8 23:31:51.813373 kubelet[1336]: E0208 23:31:51.813328 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:51.898243 kubelet[1336]: I0208 23:31:51.898173 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vz7t\" (UniqueName: \"kubernetes.io/projected/a2c2baef-34d6-46d9-be63-aacc5612a477-kube-api-access-2vz7t\") pod \"nfs-server-provisioner-0\" (UID: \"a2c2baef-34d6-46d9-be63-aacc5612a477\") " pod="default/nfs-server-provisioner-0" Feb 8 23:31:51.898565 kubelet[1336]: I0208 23:31:51.898338 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a2c2baef-34d6-46d9-be63-aacc5612a477-data\") pod \"nfs-server-provisioner-0\" (UID: \"a2c2baef-34d6-46d9-be63-aacc5612a477\") " pod="default/nfs-server-provisioner-0" Feb 8 23:31:52.052493 env[1054]: time="2024-02-08T23:31:52.050462214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a2c2baef-34d6-46d9-be63-aacc5612a477,Namespace:default,Attempt:0,}" Feb 8 23:31:52.164623 systemd-networkd[966]: lxcf2642754c1a5: Link UP Feb 8 23:31:52.174460 kernel: eth0: renamed from tmp03673 Feb 8 23:31:52.181048 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:31:52.181139 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf2642754c1a5: link becomes ready Feb 8 23:31:52.181291 systemd-networkd[966]: lxcf2642754c1a5: Gained carrier Feb 8 23:31:52.606099 env[1054]: time="2024-02-08T23:31:52.605842101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:31:52.606543 env[1054]: time="2024-02-08T23:31:52.606051051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:31:52.606757 env[1054]: time="2024-02-08T23:31:52.606692951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:31:52.607226 env[1054]: time="2024-02-08T23:31:52.607156177Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/036731c0b7c63e2be5cad89ad68783ed4baca372a25e49b7b7b50553dfc81055 pid=2507 runtime=io.containerd.runc.v2 Feb 8 23:31:52.639104 systemd[1]: Started cri-containerd-036731c0b7c63e2be5cad89ad68783ed4baca372a25e49b7b7b50553dfc81055.scope. 
Feb 8 23:31:52.690828 env[1054]: time="2024-02-08T23:31:52.690767050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a2c2baef-34d6-46d9-be63-aacc5612a477,Namespace:default,Attempt:0,} returns sandbox id \"036731c0b7c63e2be5cad89ad68783ed4baca372a25e49b7b7b50553dfc81055\"" Feb 8 23:31:52.693155 env[1054]: time="2024-02-08T23:31:52.692814509Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 8 23:31:52.814471 kubelet[1336]: E0208 23:31:52.814342 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:53.035619 systemd[1]: run-containerd-runc-k8s.io-036731c0b7c63e2be5cad89ad68783ed4baca372a25e49b7b7b50553dfc81055-runc.Oq1P9G.mount: Deactivated successfully. Feb 8 23:31:53.591612 systemd-networkd[966]: lxcf2642754c1a5: Gained IPv6LL Feb 8 23:31:53.762854 kubelet[1336]: E0208 23:31:53.762810 1336 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:53.814925 kubelet[1336]: E0208 23:31:53.814850 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:54.816049 kubelet[1336]: E0208 23:31:54.816001 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:55.816803 kubelet[1336]: E0208 23:31:55.816683 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:56.492885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257494123.mount: Deactivated successfully. Feb 8 23:31:56.818137 kubelet[1336]: E0208 23:31:56.817647 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:57.818070 kubelet[1336]: E0208 23:31:57.818021 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:58.818849 kubelet[1336]: E0208 23:31:58.818747 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:59.738808 env[1054]: time="2024-02-08T23:31:59.738629497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:59.743441 env[1054]: time="2024-02-08T23:31:59.743360188Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:59.749044 env[1054]: time="2024-02-08T23:31:59.748958020Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:59.753165 env[1054]: time="2024-02-08T23:31:59.753078949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:31:59.754941 env[1054]: time="2024-02-08T23:31:59.754852608Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns 
image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 8 23:31:59.759541 env[1054]: time="2024-02-08T23:31:59.759491396Z" level=info msg="CreateContainer within sandbox \"036731c0b7c63e2be5cad89ad68783ed4baca372a25e49b7b7b50553dfc81055\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 8 23:31:59.779022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980500699.mount: Deactivated successfully. Feb 8 23:31:59.784503 env[1054]: time="2024-02-08T23:31:59.784447615Z" level=info msg="CreateContainer within sandbox \"036731c0b7c63e2be5cad89ad68783ed4baca372a25e49b7b7b50553dfc81055\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"be51e8944459d39e1d60a4bcf80aff3249d398a1e0d54e7254cac09b22675410\"" Feb 8 23:31:59.786255 env[1054]: time="2024-02-08T23:31:59.786152705Z" level=info msg="StartContainer for \"be51e8944459d39e1d60a4bcf80aff3249d398a1e0d54e7254cac09b22675410\"" Feb 8 23:31:59.787923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685204737.mount: Deactivated successfully. Feb 8 23:31:59.825241 kubelet[1336]: E0208 23:31:59.824358 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:31:59.825475 systemd[1]: Started cri-containerd-be51e8944459d39e1d60a4bcf80aff3249d398a1e0d54e7254cac09b22675410.scope. Feb 8 23:31:59.855903 env[1054]: time="2024-02-08T23:31:59.855863855Z" level=info msg="StartContainer for \"be51e8944459d39e1d60a4bcf80aff3249d398a1e0d54e7254cac09b22675410\" returns successfully" Feb 8 23:32:00.237095 kubelet[1336]: I0208 23:32:00.236998 1336 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.17374576 podCreationTimestamp="2024-02-08 23:31:51 +0000 UTC" firstStartedPulling="2024-02-08 23:31:52.692562999 +0000 UTC m=+59.607822817" lastFinishedPulling="2024-02-08 23:31:59.755665288 +0000 UTC m=+66.670925126" observedRunningTime="2024-02-08 23:32:00.23588585 +0000 UTC m=+67.151145688" watchObservedRunningTime="2024-02-08 23:32:00.236848069 +0000 UTC m=+67.152107897" Feb 8 23:32:00.826229 kubelet[1336]: E0208 23:32:00.826161 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:01.826805 kubelet[1336]: E0208 23:32:01.826696 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:02.828133 kubelet[1336]: E0208 23:32:02.828028 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:03.829026 kubelet[1336]: E0208 23:32:03.828918 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:04.829251 kubelet[1336]: E0208 23:32:04.829197 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:05.831035 kubelet[1336]: E0208 23:32:05.830857 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:06.832188 kubelet[1336]: E0208 23:32:06.832056 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:07.833237 kubelet[1336]: E0208 23:32:07.833185 1336 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:08.834450 kubelet[1336]: E0208 23:32:08.834360 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:09.339902 kubelet[1336]: I0208 23:32:09.339743 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:32:09.352731 systemd[1]: Created slice kubepods-besteffort-pod6d034ea7_54a1_4da3_9e1b_272390106c6d.slice. Feb 8 23:32:09.527298 kubelet[1336]: I0208 23:32:09.527213 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqkt\" (UniqueName: \"kubernetes.io/projected/6d034ea7-54a1-4da3-9e1b-272390106c6d-kube-api-access-xmqkt\") pod \"test-pod-1\" (UID: \"6d034ea7-54a1-4da3-9e1b-272390106c6d\") " pod="default/test-pod-1" Feb 8 23:32:09.527608 kubelet[1336]: I0208 23:32:09.527322 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60a74dd6-9748-4442-9723-0f5f645f6a83\" (UniqueName: \"kubernetes.io/nfs/6d034ea7-54a1-4da3-9e1b-272390106c6d-pvc-60a74dd6-9748-4442-9723-0f5f645f6a83\") pod \"test-pod-1\" (UID: \"6d034ea7-54a1-4da3-9e1b-272390106c6d\") " pod="default/test-pod-1" Feb 8 23:32:09.711522 kernel: FS-Cache: Loaded Feb 8 23:32:09.786903 kernel: RPC: Registered named UNIX socket transport module. Feb 8 23:32:09.787090 kernel: RPC: Registered udp transport module. Feb 8 23:32:09.787166 kernel: RPC: Registered tcp transport module. Feb 8 23:32:09.787581 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 8 23:32:09.835965 kubelet[1336]: E0208 23:32:09.835871 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:09.843456 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 8 23:32:10.070994 kernel: NFS: Registering the id_resolver key type Feb 8 23:32:10.071192 kernel: Key type id_resolver registered Feb 8 23:32:10.071269 kernel: Key type id_legacy registered Feb 8 23:32:10.136041 nfsidmap[2631]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 8 23:32:10.147716 nfsidmap[2632]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 8 23:32:10.258831 env[1054]: time="2024-02-08T23:32:10.258696807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6d034ea7-54a1-4da3-9e1b-272390106c6d,Namespace:default,Attempt:0,}" Feb 8 23:32:10.336213 systemd-networkd[966]: lxc78fd39af8e61: Link UP Feb 8 23:32:10.346456 kernel: eth0: renamed from tmpfbb10 Feb 8 23:32:10.361076 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:32:10.362998 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc78fd39af8e61: link becomes ready Feb 8 23:32:10.361817 systemd-networkd[966]: lxc78fd39af8e61: Gained carrier Feb 8 23:32:10.615178 env[1054]: time="2024-02-08T23:32:10.615017406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:32:10.615748 env[1054]: time="2024-02-08T23:32:10.615671871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:32:10.616038 env[1054]: time="2024-02-08T23:32:10.615938971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:32:10.616646 env[1054]: time="2024-02-08T23:32:10.616560643Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbb1093f0609cf0d083b088ff927811cbc514c41076cf5b271371386fdccbe8c pid=2659 runtime=io.containerd.runc.v2 Feb 8 23:32:10.642261 systemd[1]: Started cri-containerd-fbb1093f0609cf0d083b088ff927811cbc514c41076cf5b271371386fdccbe8c.scope. Feb 8 23:32:10.706729 env[1054]: time="2024-02-08T23:32:10.706684563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6d034ea7-54a1-4da3-9e1b-272390106c6d,Namespace:default,Attempt:0,} returns sandbox id \"fbb1093f0609cf0d083b088ff927811cbc514c41076cf5b271371386fdccbe8c\"" Feb 8 23:32:10.708752 env[1054]: time="2024-02-08T23:32:10.708723468Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:32:10.837301 kubelet[1336]: E0208 23:32:10.837136 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:11.280320 env[1054]: time="2024-02-08T23:32:11.280221794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:32:11.284172 env[1054]: time="2024-02-08T23:32:11.284091536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:32:11.287785 env[1054]: time="2024-02-08T23:32:11.287719647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:32:11.292039 env[1054]: time="2024-02-08T23:32:11.291946738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:32:11.293914 env[1054]: time="2024-02-08T23:32:11.293823260Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:32:11.298802 env[1054]: time="2024-02-08T23:32:11.298738459Z" level=info msg="CreateContainer within sandbox \"fbb1093f0609cf0d083b088ff927811cbc514c41076cf5b271371386fdccbe8c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 8 23:32:11.329469 env[1054]: time="2024-02-08T23:32:11.328524295Z" level=info msg="CreateContainer within sandbox \"fbb1093f0609cf0d083b088ff927811cbc514c41076cf5b271371386fdccbe8c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6e4f02afe29c8aace132e1dcb92a43255c533d636995b1043c384feaff0441cf\"" Feb 8 23:32:11.334426 env[1054]: time="2024-02-08T23:32:11.334317888Z" level=info msg="StartContainer for \"6e4f02afe29c8aace132e1dcb92a43255c533d636995b1043c384feaff0441cf\"" Feb 8 23:32:11.390896 systemd[1]: Started cri-containerd-6e4f02afe29c8aace132e1dcb92a43255c533d636995b1043c384feaff0441cf.scope. 
Feb 8 23:32:11.425716 env[1054]: time="2024-02-08T23:32:11.425657225Z" level=info msg="StartContainer for \"6e4f02afe29c8aace132e1dcb92a43255c533d636995b1043c384feaff0441cf\" returns successfully" Feb 8 23:32:11.658187 systemd[1]: run-containerd-runc-k8s.io-6e4f02afe29c8aace132e1dcb92a43255c533d636995b1043c384feaff0441cf-runc.bAkQDZ.mount: Deactivated successfully. Feb 8 23:32:11.838279 kubelet[1336]: E0208 23:32:11.838184 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:12.088058 systemd-networkd[966]: lxc78fd39af8e61: Gained IPv6LL Feb 8 23:32:12.277718 kubelet[1336]: I0208 23:32:12.277671 1336 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.691435266 podCreationTimestamp="2024-02-08 23:31:54 +0000 UTC" firstStartedPulling="2024-02-08 23:32:10.708256634 +0000 UTC m=+77.623516472" lastFinishedPulling="2024-02-08 23:32:11.294413534 +0000 UTC m=+78.209673373" observedRunningTime="2024-02-08 23:32:12.277455871 +0000 UTC m=+79.192715710" watchObservedRunningTime="2024-02-08 23:32:12.277592167 +0000 UTC m=+79.192851995" Feb 8 23:32:12.838992 kubelet[1336]: E0208 23:32:12.838938 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:13.762540 kubelet[1336]: E0208 23:32:13.762492 1336 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:13.840442 kubelet[1336]: E0208 23:32:13.840348 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:14.841620 kubelet[1336]: E0208 23:32:14.841528 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:15.843466 kubelet[1336]: E0208 23:32:15.843358 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:16.844584 kubelet[1336]: E0208 23:32:16.844519 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:17.845516 kubelet[1336]: E0208 23:32:17.845466 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:18.846274 kubelet[1336]: E0208 23:32:18.846224 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:19.848258 kubelet[1336]: E0208 23:32:19.848119 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:20.849489 kubelet[1336]: E0208 23:32:20.849346 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:21.850126 kubelet[1336]: E0208 23:32:21.850074 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:22.852088 kubelet[1336]: E0208 23:32:22.852029 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:22.934614 systemd[1]: run-containerd-runc-k8s.io-bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814-runc.7Ez2bx.mount: Deactivated successfully. 
Feb 8 23:32:22.970858 env[1054]: time="2024-02-08T23:32:22.970767784Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:32:22.983123 env[1054]: time="2024-02-08T23:32:22.983058176Z" level=info msg="StopContainer for \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\" with timeout 1 (s)" Feb 8 23:32:22.983952 env[1054]: time="2024-02-08T23:32:22.983899507Z" level=info msg="Stop container \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\" with signal terminated" Feb 8 23:32:22.999000 systemd-networkd[966]: lxc_health: Link DOWN Feb 8 23:32:22.999025 systemd-networkd[966]: lxc_health: Lost carrier Feb 8 23:32:23.049025 systemd[1]: cri-containerd-bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814.scope: Deactivated successfully. Feb 8 23:32:23.049563 systemd[1]: cri-containerd-bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814.scope: Consumed 8.816s CPU time. Feb 8 23:32:23.083562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814-rootfs.mount: Deactivated successfully. Feb 8 23:32:23.096426 env[1054]: time="2024-02-08T23:32:23.096321954Z" level=info msg="shim disconnected" id=bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814 Feb 8 23:32:23.096670 env[1054]: time="2024-02-08T23:32:23.096455252Z" level=warning msg="cleaning up after shim disconnected" id=bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814 namespace=k8s.io Feb 8 23:32:23.096670 env[1054]: time="2024-02-08T23:32:23.096471612Z" level=info msg="cleaning up dead shim" Feb 8 23:32:23.108310 env[1054]: time="2024-02-08T23:32:23.106637961Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2790 runtime=io.containerd.runc.v2\n" Feb 8 23:32:23.110088 env[1054]: time="2024-02-08T23:32:23.110043849Z" level=info msg="StopContainer for \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\" returns successfully" Feb 8 23:32:23.110908 env[1054]: time="2024-02-08T23:32:23.110873317Z" level=info msg="StopPodSandbox for \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\"" Feb 8 23:32:23.111014 env[1054]: time="2024-02-08T23:32:23.110943337Z" level=info msg="Container to stop \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.111014 env[1054]: time="2024-02-08T23:32:23.110960379Z" level=info msg="Container to stop \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.111014 env[1054]: time="2024-02-08T23:32:23.110973253Z" level=info msg="Container to stop \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.111014 env[1054]: time="2024-02-08T23:32:23.110986206Z" level=info msg="Container to stop \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.111014 env[1054]: time="2024-02-08T23:32:23.110998319Z" level=info msg="Container to stop 
\"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.113667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c-shm.mount: Deactivated successfully. Feb 8 23:32:23.119481 systemd[1]: cri-containerd-687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c.scope: Deactivated successfully. Feb 8 23:32:23.150448 env[1054]: time="2024-02-08T23:32:23.150305264Z" level=info msg="shim disconnected" id=687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c Feb 8 23:32:23.150710 env[1054]: time="2024-02-08T23:32:23.150572229Z" level=warning msg="cleaning up after shim disconnected" id=687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c namespace=k8s.io Feb 8 23:32:23.150710 env[1054]: time="2024-02-08T23:32:23.150588449Z" level=info msg="cleaning up dead shim" Feb 8 23:32:23.161780 env[1054]: time="2024-02-08T23:32:23.161731470Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2822 runtime=io.containerd.runc.v2\n" Feb 8 23:32:23.162129 env[1054]: time="2024-02-08T23:32:23.162084724Z" level=info msg="TearDown network for sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" successfully" Feb 8 23:32:23.162177 env[1054]: time="2024-02-08T23:32:23.162138975Z" level=info msg="StopPodSandbox for \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" returns successfully" Feb 8 23:32:23.232532 kubelet[1336]: I0208 23:32:23.232480 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-bpf-maps\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.232958 kubelet[1336]: I0208 23:32:23.232590 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.233355 kubelet[1336]: I0208 23:32:23.233184 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-run\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.233755 kubelet[1336]: I0208 23:32:23.233277 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.234154 kubelet[1336]: I0208 23:32:23.233983 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-net\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.234560 kubelet[1336]: I0208 23:32:23.234075 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.234982 kubelet[1336]: I0208 23:32:23.234805 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cni-path\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.235280 kubelet[1336]: I0208 23:32:23.234889 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.235733 kubelet[1336]: I0208 23:32:23.235666 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.235963 kubelet[1336]: I0208 23:32:23.235582 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-cgroup\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.236580 kubelet[1336]: I0208 23:32:23.236288 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jngc\" (UniqueName: \"kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-kube-api-access-4jngc\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.236756 kubelet[1336]: I0208 23:32:23.236617 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-hostproc\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.236891 kubelet[1336]: I0208 23:32:23.236775 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-hubble-tls\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.236891 kubelet[1336]: I0208 23:32:23.236860 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-etc-cni-netd\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.237156 kubelet[1336]: I0208 23:32:23.236934 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-xtables-lock\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.237156 kubelet[1336]: I0208 23:32:23.237024 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-config-path\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.237156 kubelet[1336]: I0208 23:32:23.237107 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-kernel\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.237568 kubelet[1336]: I0208 23:32:23.237191 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a3dce3c-90fc-44fc-996e-c1e78804a048-clustermesh-secrets\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: \"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.237568 kubelet[1336]: I0208 23:32:23.237262 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-lib-modules\") pod \"7a3dce3c-90fc-44fc-996e-c1e78804a048\" (UID: 
\"7a3dce3c-90fc-44fc-996e-c1e78804a048\") " Feb 8 23:32:23.237568 kubelet[1336]: I0208 23:32:23.237342 1336 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-bpf-maps\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.237568 kubelet[1336]: I0208 23:32:23.237460 1336 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-run\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.237568 kubelet[1336]: I0208 23:32:23.237509 1336 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-net\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.237568 kubelet[1336]: I0208 23:32:23.237557 1336 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cni-path\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.238245 kubelet[1336]: I0208 23:32:23.237593 1336 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-cgroup\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.238245 kubelet[1336]: I0208 23:32:23.237645 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.238245 kubelet[1336]: I0208 23:32:23.237714 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.239649 kubelet[1336]: W0208 23:32:23.239308 1336 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7a3dce3c-90fc-44fc-996e-c1e78804a048/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:32:23.239890 kubelet[1336]: I0208 23:32:23.239788 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.240040 kubelet[1336]: I0208 23:32:23.239910 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.241016 kubelet[1336]: I0208 23:32:23.240612 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.252823 kubelet[1336]: I0208 23:32:23.252700 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:32:23.259541 kubelet[1336]: I0208 23:32:23.259363 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:23.259829 kubelet[1336]: I0208 23:32:23.259756 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a3dce3c-90fc-44fc-996e-c1e78804a048-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:32:23.262643 kubelet[1336]: I0208 23:32:23.262578 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-kube-api-access-4jngc" (OuterVolumeSpecName: "kube-api-access-4jngc") pod "7a3dce3c-90fc-44fc-996e-c1e78804a048" (UID: "7a3dce3c-90fc-44fc-996e-c1e78804a048"). InnerVolumeSpecName "kube-api-access-4jngc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:23.297958 kubelet[1336]: I0208 23:32:23.297901 1336 scope.go:115] "RemoveContainer" containerID="bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814" Feb 8 23:32:23.301843 env[1054]: time="2024-02-08T23:32:23.301728776Z" level=info msg="RemoveContainer for \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\"" Feb 8 23:32:23.308979 env[1054]: time="2024-02-08T23:32:23.308886243Z" level=info msg="RemoveContainer for \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\" returns successfully" Feb 8 23:32:23.312235 kubelet[1336]: I0208 23:32:23.312190 1336 scope.go:115] "RemoveContainer" containerID="d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f" Feb 8 23:32:23.315921 env[1054]: time="2024-02-08T23:32:23.315775754Z" level=info msg="RemoveContainer for \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\"" Feb 8 23:32:23.320971 systemd[1]: Removed slice kubepods-burstable-pod7a3dce3c_90fc_44fc_996e_c1e78804a048.slice. Feb 8 23:32:23.321321 systemd[1]: kubepods-burstable-pod7a3dce3c_90fc_44fc_996e_c1e78804a048.slice: Consumed 8.940s CPU time. 
Feb 8 23:32:23.324930 env[1054]: time="2024-02-08T23:32:23.324847592Z" level=info msg="RemoveContainer for \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\" returns successfully" Feb 8 23:32:23.325233 kubelet[1336]: I0208 23:32:23.325195 1336 scope.go:115] "RemoveContainer" containerID="778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935" Feb 8 23:32:23.328956 env[1054]: time="2024-02-08T23:32:23.328899529Z" level=info msg="RemoveContainer for \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\"" Feb 8 23:32:23.334531 env[1054]: time="2024-02-08T23:32:23.334466415Z" level=info msg="RemoveContainer for \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\" returns successfully" Feb 8 23:32:23.335158 kubelet[1336]: I0208 23:32:23.335125 1336 scope.go:115] "RemoveContainer" containerID="f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce" Feb 8 23:32:23.338167 env[1054]: time="2024-02-08T23:32:23.338025828Z" level=info msg="RemoveContainer for \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\"" Feb 8 23:32:23.338967 kubelet[1336]: I0208 23:32:23.338934 1336 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-hostproc\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.339317 kubelet[1336]: I0208 23:32:23.339258 1336 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-hubble-tls\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.339832 kubelet[1336]: I0208 23:32:23.339783 1336 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-etc-cni-netd\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.340103 kubelet[1336]: I0208 23:32:23.340077 1336 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-xtables-lock\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.340650 kubelet[1336]: I0208 23:32:23.340597 1336 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4jngc\" (UniqueName: \"kubernetes.io/projected/7a3dce3c-90fc-44fc-996e-c1e78804a048-kube-api-access-4jngc\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.340927 kubelet[1336]: I0208 23:32:23.340901 1336 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-host-proc-sys-kernel\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.341214 kubelet[1336]: I0208 23:32:23.341189 1336 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a3dce3c-90fc-44fc-996e-c1e78804a048-clustermesh-secrets\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.341587 kubelet[1336]: I0208 23:32:23.341531 1336 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a3dce3c-90fc-44fc-996e-c1e78804a048-lib-modules\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:23.341904 kubelet[1336]: I0208 23:32:23.341878 1336 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a3dce3c-90fc-44fc-996e-c1e78804a048-cilium-config-path\") on node \"172.24.4.234\" 
DevicePath \"\"" Feb 8 23:32:23.349278 env[1054]: time="2024-02-08T23:32:23.348578694Z" level=info msg="RemoveContainer for \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\" returns successfully" Feb 8 23:32:23.349467 kubelet[1336]: I0208 23:32:23.349055 1336 scope.go:115] "RemoveContainer" containerID="608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736" Feb 8 23:32:23.350791 env[1054]: time="2024-02-08T23:32:23.350765821Z" level=info msg="RemoveContainer for \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\"" Feb 8 23:32:23.353910 env[1054]: time="2024-02-08T23:32:23.353884757Z" level=info msg="RemoveContainer for \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\" returns successfully" Feb 8 23:32:23.354145 kubelet[1336]: I0208 23:32:23.354130 1336 scope.go:115] "RemoveContainer" containerID="bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814" Feb 8 23:32:23.354540 env[1054]: time="2024-02-08T23:32:23.354480691Z" level=error msg="ContainerStatus for \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\": not found" Feb 8 23:32:23.354802 kubelet[1336]: E0208 23:32:23.354790 1336 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\": not found" containerID="bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814" Feb 8 23:32:23.354896 kubelet[1336]: I0208 23:32:23.354886 1336 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814} err="failed to get container status \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb389e624156b09941aa3073029864651807d72a6792589c3fe0d17f21170814\": not found" Feb 8 23:32:23.354961 kubelet[1336]: I0208 23:32:23.354952 1336 scope.go:115] "RemoveContainer" containerID="d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f" Feb 8 23:32:23.355482 env[1054]: time="2024-02-08T23:32:23.355337992Z" level=error msg="ContainerStatus for \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\": not found" Feb 8 23:32:23.355850 kubelet[1336]: E0208 23:32:23.355785 1336 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\": not found" containerID="d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f" Feb 8 23:32:23.355910 kubelet[1336]: I0208 23:32:23.355892 1336 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f} err="failed to get container status \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d222f6bc5a0443d23c105d009e0eb08e485373a40ae3b5b40a91a697ab788d9f\": not found" 
Feb 8 23:32:23.355969 kubelet[1336]: I0208 23:32:23.355955 1336 scope.go:115] "RemoveContainer" containerID="778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935" Feb 8 23:32:23.356240 env[1054]: time="2024-02-08T23:32:23.356197025Z" level=error msg="ContainerStatus for \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\": not found" Feb 8 23:32:23.356439 kubelet[1336]: E0208 23:32:23.356428 1336 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\": not found" containerID="778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935" Feb 8 23:32:23.356533 kubelet[1336]: I0208 23:32:23.356523 1336 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935} err="failed to get container status \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\": rpc error: code = NotFound desc = an error occurred when try to find container \"778ef38087a99b28fd0200e143e7adedf2221854d967bc54359eeee70f62d935\": not found" Feb 8 23:32:23.356602 kubelet[1336]: I0208 23:32:23.356593 1336 scope.go:115] "RemoveContainer" containerID="f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce" Feb 8 23:32:23.356992 env[1054]: time="2024-02-08T23:32:23.356909297Z" level=error msg="ContainerStatus for \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\": not found" Feb 8 23:32:23.357268 kubelet[1336]: E0208 23:32:23.357246 1336 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\": not found" containerID="f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce" Feb 8 23:32:23.357353 kubelet[1336]: I0208 23:32:23.357337 1336 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce} err="failed to get container status \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"f63de27b31bad4ed6c086c6fced880f688c357ba5588d25539b3e0171659f1ce\": not found" Feb 8 23:32:23.357444 kubelet[1336]: I0208 23:32:23.357428 1336 scope.go:115] "RemoveContainer" containerID="608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736" Feb 8 23:32:23.357693 env[1054]: time="2024-02-08T23:32:23.357654249Z" level=error msg="ContainerStatus for \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\": not found" Feb 8 23:32:23.357959 kubelet[1336]: E0208 23:32:23.357937 1336 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\": not found" containerID="608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736" Feb 8 23:32:23.358007 kubelet[1336]: I0208 23:32:23.357993 1336 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736} err="failed to get container status \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\": rpc error: code = NotFound desc = an error occurred when try to find container \"608384a94c60a7dcbab8939a4ea92c8cf7677e589e5d7771ce0629a24410f736\": not found" Feb 8 23:32:23.853408 kubelet[1336]: E0208 23:32:23.853332 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:23.890271 kubelet[1336]: E0208 23:32:23.890199 1336 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:32:23.927048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c-rootfs.mount: Deactivated successfully. Feb 8 23:32:23.927298 systemd[1]: var-lib-kubelet-pods-7a3dce3c\x2d90fc\x2d44fc\x2d996e\x2dc1e78804a048-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jngc.mount: Deactivated successfully. Feb 8 23:32:23.927539 systemd[1]: var-lib-kubelet-pods-7a3dce3c\x2d90fc\x2d44fc\x2d996e\x2dc1e78804a048-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:32:23.927701 systemd[1]: var-lib-kubelet-pods-7a3dce3c\x2d90fc\x2d44fc\x2d996e\x2dc1e78804a048-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 8 23:32:23.968473 kubelet[1336]: I0208 23:32:23.968429 1336 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=7a3dce3c-90fc-44fc-996e-c1e78804a048 path="/var/lib/kubelet/pods/7a3dce3c-90fc-44fc-996e-c1e78804a048/volumes" Feb 8 23:32:24.854956 kubelet[1336]: E0208 23:32:24.854905 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:25.856595 kubelet[1336]: E0208 23:32:25.856474 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:26.857119 kubelet[1336]: E0208 23:32:26.857038 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:27.443050 kubelet[1336]: I0208 23:32:27.443009 1336 setters.go:548] "Node became not ready" node="172.24.4.234" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:32:27.442912207 +0000 UTC m=+94.358172045 LastTransitionTime:2024-02-08 23:32:27.442912207 +0000 UTC m=+94.358172045 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:32:27.858443 kubelet[1336]: E0208 23:32:27.858365 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:28.207427 kubelet[1336]: I0208 23:32:28.205182 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:32:28.207427 kubelet[1336]: E0208 23:32:28.205572 1336 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a3dce3c-90fc-44fc-996e-c1e78804a048" containerName="mount-cgroup" Feb 8 23:32:28.207427 kubelet[1336]: E0208 23:32:28.205591 1336 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a3dce3c-90fc-44fc-996e-c1e78804a048" containerName="mount-bpf-fs" Feb 8 23:32:28.207427 kubelet[1336]: E0208 23:32:28.205602 1336 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a3dce3c-90fc-44fc-996e-c1e78804a048" containerName="cilium-agent" Feb 8 23:32:28.207427 kubelet[1336]: E0208 23:32:28.205612 1336 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a3dce3c-90fc-44fc-996e-c1e78804a048" containerName="apply-sysctl-overwrites" Feb 8 23:32:28.207427 kubelet[1336]: E0208 23:32:28.205622 1336 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a3dce3c-90fc-44fc-996e-c1e78804a048" containerName="clean-cilium-state" Feb 8 23:32:28.207427 kubelet[1336]: I0208 23:32:28.205650 1336 memory_manager.go:346] "RemoveStaleState removing state" podUID="7a3dce3c-90fc-44fc-996e-c1e78804a048" containerName="cilium-agent" Feb 8 23:32:28.213025 systemd[1]: Created slice kubepods-besteffort-podab3370c7_bae0_48da_8d76_aca3602aff62.slice. Feb 8 23:32:28.214464 kubelet[1336]: I0208 23:32:28.213668 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:32:28.224658 systemd[1]: Created slice kubepods-burstable-pod18dff0ac_6053_4f60_b0bc_4ffe0c7cfc88.slice. 
Feb 8 23:32:28.279675 kubelet[1336]: I0208 23:32:28.279589 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89jx\" (UniqueName: \"kubernetes.io/projected/ab3370c7-bae0-48da-8d76-aca3602aff62-kube-api-access-h89jx\") pod \"cilium-operator-574c4bb98d-4m5m7\" (UID: \"ab3370c7-bae0-48da-8d76-aca3602aff62\") " pod="kube-system/cilium-operator-574c4bb98d-4m5m7" Feb 8 23:32:28.279944 kubelet[1336]: I0208 23:32:28.279832 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab3370c7-bae0-48da-8d76-aca3602aff62-cilium-config-path\") pod \"cilium-operator-574c4bb98d-4m5m7\" (UID: \"ab3370c7-bae0-48da-8d76-aca3602aff62\") " pod="kube-system/cilium-operator-574c4bb98d-4m5m7" Feb 8 23:32:28.380642 kubelet[1336]: I0208 23:32:28.380616 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-lib-modules\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.380762 kubelet[1336]: I0208 23:32:28.380703 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-ipsec-secrets\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.380796 kubelet[1336]: I0208 23:32:28.380774 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-run\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.380849 kubelet[1336]: I0208 23:32:28.380809 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-etc-cni-netd\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.380884 kubelet[1336]: I0208 23:32:28.380870 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-xtables-lock\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.380915 kubelet[1336]: I0208 23:32:28.380897 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-config-path\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.380980 kubelet[1336]: I0208 23:32:28.380959 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hubble-tls\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.381051 kubelet[1336]: I0208 23:32:28.381039 1336 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnm45\" (UniqueName: \"kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-kube-api-access-xnm45\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.381089 kubelet[1336]: I0208 23:32:28.381069 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hostproc\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.381158 kubelet[1336]: I0208 23:32:28.381139 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-bpf-maps\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.383434 kubelet[1336]: I0208 23:32:28.383418 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-net\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.383540 kubelet[1336]: I0208 23:32:28.383530 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-kernel\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.383626 kubelet[1336]: I0208 23:32:28.383616 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-cgroup\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.383728 kubelet[1336]: I0208 23:32:28.383717 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-clustermesh-secrets\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.383820 kubelet[1336]: I0208 23:32:28.383809 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cni-path\") pod \"cilium-42tbf\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " pod="kube-system/cilium-42tbf" Feb 8 23:32:28.532017 env[1054]: time="2024-02-08T23:32:28.531838598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-4m5m7,Uid:ab3370c7-bae0-48da-8d76-aca3602aff62,Namespace:kube-system,Attempt:0,}" Feb 8 23:32:28.540623 env[1054]: time="2024-02-08T23:32:28.540491698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-42tbf,Uid:18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88,Namespace:kube-system,Attempt:0,}" Feb 8 23:32:28.564365 env[1054]: time="2024-02-08T23:32:28.564228140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:32:28.564653 env[1054]: time="2024-02-08T23:32:28.564370756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:32:28.564653 env[1054]: time="2024-02-08T23:32:28.564492722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:32:28.564799 env[1054]: time="2024-02-08T23:32:28.564736214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6a66cf94aa357292aa71311ce0a49d4bfddab8971c4e214112305166c544c27 pid=2856 runtime=io.containerd.runc.v2 Feb 8 23:32:28.567274 env[1054]: time="2024-02-08T23:32:28.567151280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:32:28.567711 env[1054]: time="2024-02-08T23:32:28.567594232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:32:28.567970 env[1054]: time="2024-02-08T23:32:28.567651589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:32:28.568582 env[1054]: time="2024-02-08T23:32:28.568453829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d pid=2861 runtime=io.containerd.runc.v2 Feb 8 23:32:28.597539 systemd[1]: Started cri-containerd-a6a66cf94aa357292aa71311ce0a49d4bfddab8971c4e214112305166c544c27.scope. Feb 8 23:32:28.605744 systemd[1]: Started cri-containerd-424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d.scope. 
Feb 8 23:32:28.631079 env[1054]: time="2024-02-08T23:32:28.631035074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-42tbf,Uid:18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88,Namespace:kube-system,Attempt:0,} returns sandbox id \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\"" Feb 8 23:32:28.634052 env[1054]: time="2024-02-08T23:32:28.634025187Z" level=info msg="CreateContainer within sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:32:28.649080 env[1054]: time="2024-02-08T23:32:28.649035063Z" level=info msg="CreateContainer within sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\"" Feb 8 23:32:28.649849 env[1054]: time="2024-02-08T23:32:28.649819440Z" level=info msg="StartContainer for \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\"" Feb 8 23:32:28.661806 env[1054]: time="2024-02-08T23:32:28.661743745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-4m5m7,Uid:ab3370c7-bae0-48da-8d76-aca3602aff62,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6a66cf94aa357292aa71311ce0a49d4bfddab8971c4e214112305166c544c27\"" Feb 8 23:32:28.663400 env[1054]: time="2024-02-08T23:32:28.663331714Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:32:28.677273 systemd[1]: Started cri-containerd-e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02.scope. Feb 8 23:32:28.688957 systemd[1]: cri-containerd-e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02.scope: Deactivated successfully. 
Feb 8 23:32:28.709492 env[1054]: time="2024-02-08T23:32:28.709373563Z" level=info msg="shim disconnected" id=e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02 Feb 8 23:32:28.709492 env[1054]: time="2024-02-08T23:32:28.709494227Z" level=warning msg="cleaning up after shim disconnected" id=e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02 namespace=k8s.io Feb 8 23:32:28.709695 env[1054]: time="2024-02-08T23:32:28.709506329Z" level=info msg="cleaning up dead shim" Feb 8 23:32:28.717511 env[1054]: time="2024-02-08T23:32:28.717449822Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2950 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:32:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:32:28.717839 env[1054]: time="2024-02-08T23:32:28.717708743Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Feb 8 23:32:28.719194 env[1054]: time="2024-02-08T23:32:28.719143507Z" level=error msg="Failed to pipe stdout of container \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\"" error="reading from a closed fifo" Feb 8 23:32:28.719244 env[1054]: time="2024-02-08T23:32:28.719202136Z" level=error msg="Failed to pipe stderr of container \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\"" error="reading from a closed fifo" Feb 8 23:32:28.722550 env[1054]: time="2024-02-08T23:32:28.722493389Z" level=error msg="StartContainer for \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:32:28.722790 kubelet[1336]: E0208 23:32:28.722743 1336 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02" Feb 8 23:32:28.722888 kubelet[1336]: E0208 23:32:28.722867 1336 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:32:28.722888 kubelet[1336]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:32:28.722888 kubelet[1336]: rm /hostbin/cilium-mount Feb 8 23:32:28.722990 kubelet[1336]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xnm45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-42tbf_kube-system(18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:32:28.722990 kubelet[1336]: E0208 23:32:28.722914 1336 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-42tbf" podUID=18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88 Feb 8 23:32:28.859545 kubelet[1336]: E0208 23:32:28.859460 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:28.892219 kubelet[1336]: E0208 23:32:28.892164 1336 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:32:29.332707 env[1054]: time="2024-02-08T23:32:29.332086014Z" level=info msg="CreateContainer within sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 8 23:32:29.351118 env[1054]: time="2024-02-08T23:32:29.351004928Z" level=info msg="CreateContainer within sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\"" Feb 8 23:32:29.354327 env[1054]: time="2024-02-08T23:32:29.354269021Z" level=info msg="StartContainer for \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\"" Feb 8 23:32:29.388445 systemd[1]: Started cri-containerd-6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2.scope. 
Feb 8 23:32:29.426573 systemd[1]: cri-containerd-6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2.scope: Deactivated successfully. Feb 8 23:32:29.433039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2-rootfs.mount: Deactivated successfully. Feb 8 23:32:29.440303 env[1054]: time="2024-02-08T23:32:29.440252422Z" level=info msg="shim disconnected" id=6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2 Feb 8 23:32:29.440489 env[1054]: time="2024-02-08T23:32:29.440305751Z" level=warning msg="cleaning up after shim disconnected" id=6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2 namespace=k8s.io Feb 8 23:32:29.440489 env[1054]: time="2024-02-08T23:32:29.440317744Z" level=info msg="cleaning up dead shim" Feb 8 23:32:29.448416 env[1054]: time="2024-02-08T23:32:29.448343742Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2987 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:32:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:32:29.448683 env[1054]: time="2024-02-08T23:32:29.448612382Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Feb 8 23:32:29.451457 env[1054]: time="2024-02-08T23:32:29.451414787Z" level=error msg="Failed to pipe stdout of container \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\"" error="reading from a closed fifo" Feb 8 23:32:29.451513 env[1054]: time="2024-02-08T23:32:29.451487662Z" level=error msg="Failed to pipe stderr of container \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\"" error="reading from a closed fifo" Feb 8 23:32:29.455060 env[1054]: time="2024-02-08T23:32:29.455014775Z" level=error msg="StartContainer for \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:32:29.455513 kubelet[1336]: E0208 23:32:29.455335 1336 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2" Feb 8 23:32:29.456056 kubelet[1336]: E0208 23:32:29.455668 1336 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:32:29.456056 kubelet[1336]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:32:29.456056 kubelet[1336]: rm /hostbin/cilium-mount Feb 8 23:32:29.456056 kubelet[1336]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xnm45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-42tbf_kube-system(18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:32:29.456056 kubelet[1336]: E0208 23:32:29.455734 1336 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-42tbf" podUID=18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88 Feb 8 23:32:29.860536 kubelet[1336]: E0208 23:32:29.860462 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:30.336056 kubelet[1336]: I0208 23:32:30.335916 1336 scope.go:115] "RemoveContainer" containerID="e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02" Feb 8 23:32:30.336991 kubelet[1336]: I0208 23:32:30.336954 1336 scope.go:115] "RemoveContainer" containerID="e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02" Feb 8 23:32:30.340342 env[1054]: time="2024-02-08T23:32:30.340222128Z" level=info msg="RemoveContainer for \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\"" Feb 8 23:32:30.341312 env[1054]: time="2024-02-08T23:32:30.340482632Z" level=info msg="RemoveContainer for \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\"" Feb 8 23:32:30.342582 env[1054]: time="2024-02-08T23:32:30.342477979Z" level=error msg="RemoveContainer for \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\" failed" error="failed to set removing state for container \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\": container is already in removing state" Feb 8 23:32:30.342871 kubelet[1336]: E0208 23:32:30.342762 1336 remote_runtime.go:368] "RemoveContainer from runtime service 
failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\": container is already in removing state" containerID="e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02" Feb 8 23:32:30.342871 kubelet[1336]: E0208 23:32:30.342819 1336 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02": container is already in removing state; Skipping pod "cilium-42tbf_kube-system(18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88)" Feb 8 23:32:30.343474 kubelet[1336]: E0208 23:32:30.343441 1336 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-42tbf_kube-system(18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88)\"" pod="kube-system/cilium-42tbf" podUID=18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88 Feb 8 23:32:30.368612 env[1054]: time="2024-02-08T23:32:30.368537798Z" level=info msg="RemoveContainer for \"e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02\" returns successfully" Feb 8 23:32:30.374064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327590274.mount: Deactivated successfully. Feb 8 23:32:30.861370 kubelet[1336]: E0208 23:32:30.861288 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:31.341670 env[1054]: time="2024-02-08T23:32:31.341487571Z" level=info msg="StopPodSandbox for \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\"" Feb 8 23:32:31.341670 env[1054]: time="2024-02-08T23:32:31.341615229Z" level=info msg="Container to stop \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:31.344527 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d-shm.mount: Deactivated successfully. Feb 8 23:32:31.362550 systemd[1]: cri-containerd-424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d.scope: Deactivated successfully. Feb 8 23:32:31.404605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d-rootfs.mount: Deactivated successfully. 
Feb 8 23:32:31.626266 env[1054]: time="2024-02-08T23:32:31.626166340Z" level=info msg="shim disconnected" id=424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d Feb 8 23:32:31.627519 env[1054]: time="2024-02-08T23:32:31.627467960Z" level=warning msg="cleaning up after shim disconnected" id=424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d namespace=k8s.io Feb 8 23:32:31.627735 env[1054]: time="2024-02-08T23:32:31.627696945Z" level=info msg="cleaning up dead shim" Feb 8 23:32:31.663355 env[1054]: time="2024-02-08T23:32:31.663254133Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3023 runtime=io.containerd.runc.v2\n" Feb 8 23:32:31.664508 env[1054]: time="2024-02-08T23:32:31.664446077Z" level=info msg="TearDown network for sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" successfully" Feb 8 23:32:31.664718 env[1054]: time="2024-02-08T23:32:31.664670925Z" level=info msg="StopPodSandbox for \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" returns successfully" Feb 8 23:32:31.811655 kubelet[1336]: I0208 23:32:31.811425 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cni-path\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.812181 kubelet[1336]: I0208 23:32:31.812154 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-ipsec-secrets\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.812943 kubelet[1336]: I0208 23:32:31.812917 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-bpf-maps\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.813195 kubelet[1336]: I0208 23:32:31.813148 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-kernel\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.813435 kubelet[1336]: I0208 23:32:31.813408 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-cgroup\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.813667 kubelet[1336]: I0208 23:32:31.813622 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-xtables-lock\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.813887 kubelet[1336]: I0208 23:32:31.813842 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hostproc\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") 
" Feb 8 23:32:31.814122 kubelet[1336]: I0208 23:32:31.814099 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-clustermesh-secrets\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.814354 kubelet[1336]: I0208 23:32:31.814331 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnm45\" (UniqueName: \"kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-kube-api-access-xnm45\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.814663 kubelet[1336]: I0208 23:32:31.814638 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-lib-modules\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.814902 kubelet[1336]: I0208 23:32:31.814855 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-run\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.815142 kubelet[1336]: I0208 23:32:31.815117 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-etc-cni-netd\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.815419 kubelet[1336]: I0208 23:32:31.815361 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hubble-tls\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.815699 kubelet[1336]: I0208 23:32:31.815674 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-net\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.815991 kubelet[1336]: I0208 23:32:31.815968 1336 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-config-path\") pod \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\" (UID: \"18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88\") " Feb 8 23:32:31.816668 kubelet[1336]: W0208 23:32:31.816579 1336 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:32:31.817058 kubelet[1336]: I0208 23:32:31.811969 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cni-path" (OuterVolumeSpecName: "cni-path") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.818999 kubelet[1336]: I0208 23:32:31.818552 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.819222 kubelet[1336]: I0208 23:32:31.818569 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.819452 kubelet[1336]: I0208 23:32:31.818582 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.819656 kubelet[1336]: I0208 23:32:31.818595 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.819843 kubelet[1336]: I0208 23:32:31.818607 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hostproc" (OuterVolumeSpecName: "hostproc") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.822107 kubelet[1336]: I0208 23:32:31.822073 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:32:31.822107 kubelet[1336]: I0208 23:32:31.822110 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.822307 kubelet[1336]: I0208 23:32:31.822274 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.823501 kubelet[1336]: I0208 23:32:31.823467 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.823501 kubelet[1336]: I0208 23:32:31.823500 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.828505 systemd[1]: var-lib-kubelet-pods-18dff0ac\x2d6053\x2d4f60\x2db0bc\x2d4ffe0c7cfc88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxnm45.mount: Deactivated successfully. Feb 8 23:32:31.835642 systemd[1]: var-lib-kubelet-pods-18dff0ac\x2d6053\x2d4f60\x2db0bc\x2d4ffe0c7cfc88-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 8 23:32:31.839223 kubelet[1336]: I0208 23:32:31.839176 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-kube-api-access-xnm45" (OuterVolumeSpecName: "kube-api-access-xnm45") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "kube-api-access-xnm45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:31.839622 kubelet[1336]: W0208 23:32:31.839579 1336 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18dff0ac_6053_4f60_b0bc_4ffe0c7cfc88.slice/cri-containerd-e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02.scope WatchSource:0}: container "e3ae4b511f2d40d9f1660ed24ea8a3c46312f3e0d3711f3aa3988444822c3f02" in namespace "k8s.io": not found Feb 8 23:32:31.843657 systemd[1]: var-lib-kubelet-pods-18dff0ac\x2d6053\x2d4f60\x2db0bc\x2d4ffe0c7cfc88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:32:31.849563 kubelet[1336]: I0208 23:32:31.849515 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:32:31.851220 kubelet[1336]: I0208 23:32:31.851185 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:32:31.852013 kubelet[1336]: I0208 23:32:31.851978 1336 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" (UID: "18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:31.862193 kubelet[1336]: E0208 23:32:31.862163 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916278 1336 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cni-path\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916313 1336 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-ipsec-secrets\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916328 1336 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-bpf-maps\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916345 1336 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-kernel\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916358 1336 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-cgroup\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916370 1336 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-xtables-lock\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916396 1336 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hostproc\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916410 1336 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-clustermesh-secrets\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916423 1336 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xnm45\" (UniqueName: \"kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-kube-api-access-xnm45\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916436 1336 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-lib-modules\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916457 1336 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-run\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916470 1336 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-etc-cni-netd\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916488 1336 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-hubble-tls\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916500 1336 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-host-proc-sys-net\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.921220 kubelet[1336]: I0208 23:32:31.916512 1336 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88-cilium-config-path\") on node \"172.24.4.234\" DevicePath \"\"" Feb 8 23:32:31.967776 systemd[1]: Removed slice kubepods-burstable-pod18dff0ac_6053_4f60_b0bc_4ffe0c7cfc88.slice. Feb 8 23:32:31.993260 env[1054]: time="2024-02-08T23:32:31.993101480Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:32:31.996753 env[1054]: time="2024-02-08T23:32:31.996708483Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:32:31.999936 env[1054]: time="2024-02-08T23:32:31.999889424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:32:32.001237 env[1054]: time="2024-02-08T23:32:32.001199639Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:32:32.004314 env[1054]: time="2024-02-08T23:32:32.004275787Z" level=info msg="CreateContainer within sandbox \"a6a66cf94aa357292aa71311ce0a49d4bfddab8971c4e214112305166c544c27\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:32:32.032662 env[1054]: time="2024-02-08T23:32:32.032520715Z" level=info msg="CreateContainer within sandbox \"a6a66cf94aa357292aa71311ce0a49d4bfddab8971c4e214112305166c544c27\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0ceacafa5761941f532f754aa75e0158f13b2e0886460a747588a7addfbec154\"" Feb 8 23:32:32.033709 env[1054]: time="2024-02-08T23:32:32.033549457Z" level=info msg="StartContainer for \"0ceacafa5761941f532f754aa75e0158f13b2e0886460a747588a7addfbec154\"" Feb 8 23:32:32.058618 systemd[1]: Started cri-containerd-0ceacafa5761941f532f754aa75e0158f13b2e0886460a747588a7addfbec154.scope. 
Feb 8 23:32:32.237502 env[1054]: time="2024-02-08T23:32:32.237168463Z" level=info msg="StartContainer for \"0ceacafa5761941f532f754aa75e0158f13b2e0886460a747588a7addfbec154\" returns successfully" Feb 8 23:32:32.349456 kubelet[1336]: I0208 23:32:32.349355 1336 scope.go:115] "RemoveContainer" containerID="6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2" Feb 8 23:32:32.356698 systemd[1]: var-lib-kubelet-pods-18dff0ac\x2d6053\x2d4f60\x2db0bc\x2d4ffe0c7cfc88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:32:32.385875 env[1054]: time="2024-02-08T23:32:32.385088492Z" level=info msg="RemoveContainer for \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\"" Feb 8 23:32:32.398099 env[1054]: time="2024-02-08T23:32:32.398024666Z" level=info msg="RemoveContainer for \"6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2\" returns successfully" Feb 8 23:32:32.441666 kubelet[1336]: I0208 23:32:32.441630 1336 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-4m5m7" podStartSLOduration=1.10281402 podCreationTimestamp="2024-02-08 23:32:28 +0000 UTC" firstStartedPulling="2024-02-08 23:32:28.66284456 +0000 UTC m=+95.578104348" lastFinishedPulling="2024-02-08 23:32:32.001616965 +0000 UTC m=+98.916876753" observedRunningTime="2024-02-08 23:32:32.422342336 +0000 UTC m=+99.337602124" watchObservedRunningTime="2024-02-08 23:32:32.441586425 +0000 UTC m=+99.356846213" Feb 8 23:32:32.480674 kubelet[1336]: I0208 23:32:32.480619 1336 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:32:32.481060 kubelet[1336]: E0208 23:32:32.481038 1336 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" containerName="mount-cgroup" Feb 8 23:32:32.481228 kubelet[1336]: E0208 23:32:32.481206 1336 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" containerName="mount-cgroup" Feb 8 23:32:32.481459 kubelet[1336]: I0208 23:32:32.481439 1336 memory_manager.go:346] "RemoveStaleState removing state" podUID="18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" containerName="mount-cgroup" Feb 8 23:32:32.481709 kubelet[1336]: I0208 23:32:32.481660 1336 memory_manager.go:346] "RemoveStaleState removing state" podUID="18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88" containerName="mount-cgroup" Feb 8 23:32:32.492712 systemd[1]: Created slice kubepods-burstable-pod6a27e672_f70a_4d25_8a24_7419cdf6b327.slice. 
Feb 8 23:32:32.620881 kubelet[1336]: I0208 23:32:32.620714 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-hostproc\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.620881 kubelet[1336]: I0208 23:32:32.620865 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-cilium-cgroup\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621218 kubelet[1336]: I0208 23:32:32.621002 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-host-proc-sys-net\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621218 kubelet[1336]: I0208 23:32:32.621109 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-bpf-maps\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621218 kubelet[1336]: I0208 23:32:32.621212 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-cni-path\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621528 kubelet[1336]: I0208 23:32:32.621311 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-etc-cni-netd\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621528 kubelet[1336]: I0208 23:32:32.621418 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a27e672-f70a-4d25-8a24-7419cdf6b327-cilium-config-path\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621528 kubelet[1336]: I0208 23:32:32.621518 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a27e672-f70a-4d25-8a24-7419cdf6b327-cilium-ipsec-secrets\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621778 kubelet[1336]: I0208 23:32:32.621631 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-cilium-run\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.621855 kubelet[1336]: I0208 23:32:32.621838 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-xtables-lock\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.622491 kubelet[1336]: I0208 23:32:32.622446 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a27e672-f70a-4d25-8a24-7419cdf6b327-clustermesh-secrets\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.622692 kubelet[1336]: I0208 23:32:32.622641 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a27e672-f70a-4d25-8a24-7419cdf6b327-hubble-tls\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.622940 kubelet[1336]: I0208 23:32:32.622860 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fssx\" (UniqueName: \"kubernetes.io/projected/6a27e672-f70a-4d25-8a24-7419cdf6b327-kube-api-access-2fssx\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.623136 kubelet[1336]: I0208 23:32:32.623061 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-lib-modules\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.623290 kubelet[1336]: I0208 23:32:32.623263 1336 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a27e672-f70a-4d25-8a24-7419cdf6b327-host-proc-sys-kernel\") pod \"cilium-v79rf\" (UID: \"6a27e672-f70a-4d25-8a24-7419cdf6b327\") " pod="kube-system/cilium-v79rf" Feb 8 23:32:32.805976 env[1054]: time="2024-02-08T23:32:32.804798235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v79rf,Uid:6a27e672-f70a-4d25-8a24-7419cdf6b327,Namespace:kube-system,Attempt:0,}" Feb 8 23:32:32.845271 env[1054]: time="2024-02-08T23:32:32.844768587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:32:32.845271 env[1054]: time="2024-02-08T23:32:32.844851883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:32:32.845271 env[1054]: time="2024-02-08T23:32:32.844885254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:32:32.846144 env[1054]: time="2024-02-08T23:32:32.845988074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7 pid=3091 runtime=io.containerd.runc.v2 Feb 8 23:32:32.864189 kubelet[1336]: E0208 23:32:32.864069 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:32.884816 systemd[1]: Started cri-containerd-7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7.scope. 
Feb 8 23:32:32.924487 env[1054]: time="2024-02-08T23:32:32.924432418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v79rf,Uid:6a27e672-f70a-4d25-8a24-7419cdf6b327,Namespace:kube-system,Attempt:0,} returns sandbox id \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\"" Feb 8 23:32:32.928540 env[1054]: time="2024-02-08T23:32:32.928489219Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:32:32.949396 env[1054]: time="2024-02-08T23:32:32.949311230Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b\"" Feb 8 23:32:32.950150 env[1054]: time="2024-02-08T23:32:32.950098494Z" level=info msg="StartContainer for \"bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b\"" Feb 8 23:32:32.974920 systemd[1]: Started cri-containerd-bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b.scope. Feb 8 23:32:33.009596 env[1054]: time="2024-02-08T23:32:33.009553866Z" level=info msg="StartContainer for \"bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b\" returns successfully" Feb 8 23:32:33.020540 systemd[1]: cri-containerd-bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b.scope: Deactivated successfully. Feb 8 23:32:33.047183 env[1054]: time="2024-02-08T23:32:33.047111517Z" level=info msg="shim disconnected" id=bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b Feb 8 23:32:33.047463 env[1054]: time="2024-02-08T23:32:33.047441250Z" level=warning msg="cleaning up after shim disconnected" id=bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b namespace=k8s.io Feb 8 23:32:33.047556 env[1054]: time="2024-02-08T23:32:33.047539683Z" level=info msg="cleaning up dead shim" Feb 8 23:32:33.055247 env[1054]: time="2024-02-08T23:32:33.055200275Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3172 runtime=io.containerd.runc.v2\n" Feb 8 23:32:33.374825 env[1054]: time="2024-02-08T23:32:33.374763267Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:32:33.399275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237809893.mount: Deactivated successfully. Feb 8 23:32:33.407353 env[1054]: time="2024-02-08T23:32:33.407262183Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6\"" Feb 8 23:32:33.409076 env[1054]: time="2024-02-08T23:32:33.409023798Z" level=info msg="StartContainer for \"70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6\"" Feb 8 23:32:33.440876 systemd[1]: Started cri-containerd-70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6.scope. 
Feb 8 23:32:33.488329 env[1054]: time="2024-02-08T23:32:33.488211642Z" level=info msg="StartContainer for \"70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6\" returns successfully" Feb 8 23:32:33.491546 systemd[1]: cri-containerd-70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6.scope: Deactivated successfully. Feb 8 23:32:33.518931 env[1054]: time="2024-02-08T23:32:33.518859337Z" level=info msg="shim disconnected" id=70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6 Feb 8 23:32:33.519255 env[1054]: time="2024-02-08T23:32:33.519224697Z" level=warning msg="cleaning up after shim disconnected" id=70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6 namespace=k8s.io Feb 8 23:32:33.519344 env[1054]: time="2024-02-08T23:32:33.519327187Z" level=info msg="cleaning up dead shim" Feb 8 23:32:33.528947 env[1054]: time="2024-02-08T23:32:33.528909120Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3235 runtime=io.containerd.runc.v2\n" Feb 8 23:32:33.764539 kubelet[1336]: E0208 23:32:33.763697 1336 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:33.864347 kubelet[1336]: E0208 23:32:33.864311 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:33.894294 kubelet[1336]: E0208 23:32:33.894210 1336 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:32:33.967367 kubelet[1336]: I0208 23:32:33.967332 1336 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88 path="/var/lib/kubelet/pods/18dff0ac-6053-4f60-b0bc-4ffe0c7cfc88/volumes" Feb 8 23:32:34.346976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6-rootfs.mount: Deactivated successfully. Feb 8 23:32:34.386518 env[1054]: time="2024-02-08T23:32:34.386453623Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:32:34.422510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380722511.mount: Deactivated successfully. Feb 8 23:32:34.435864 env[1054]: time="2024-02-08T23:32:34.435755684Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac\"" Feb 8 23:32:34.437134 env[1054]: time="2024-02-08T23:32:34.437060119Z" level=info msg="StartContainer for \"c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac\"" Feb 8 23:32:34.474343 systemd[1]: Started cri-containerd-c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac.scope. Feb 8 23:32:34.535062 env[1054]: time="2024-02-08T23:32:34.534971363Z" level=info msg="StartContainer for \"c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac\" returns successfully" Feb 8 23:32:34.543246 systemd[1]: cri-containerd-c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac.scope: Deactivated successfully. 
Feb 8 23:32:34.581594 env[1054]: time="2024-02-08T23:32:34.581540353Z" level=info msg="shim disconnected" id=c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac Feb 8 23:32:34.581899 env[1054]: time="2024-02-08T23:32:34.581880295Z" level=warning msg="cleaning up after shim disconnected" id=c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac namespace=k8s.io Feb 8 23:32:34.581994 env[1054]: time="2024-02-08T23:32:34.581978297Z" level=info msg="cleaning up dead shim" Feb 8 23:32:34.590245 env[1054]: time="2024-02-08T23:32:34.590198422Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3295 runtime=io.containerd.runc.v2\n" Feb 8 23:32:34.865163 kubelet[1336]: E0208 23:32:34.865090 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:34.957432 kubelet[1336]: W0208 23:32:34.956986 1336 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18dff0ac_6053_4f60_b0bc_4ffe0c7cfc88.slice/cri-containerd-6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2.scope WatchSource:0}: container "6dcb9f1d56753668da5ca54950ecee1778d088268750ce2deacb066a595013f2" in namespace "k8s.io": not found Feb 8 23:32:35.393466 env[1054]: time="2024-02-08T23:32:35.393342518Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:32:35.432269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948852292.mount: Deactivated successfully. Feb 8 23:32:35.454841 env[1054]: time="2024-02-08T23:32:35.454722657Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f\"" Feb 8 23:32:35.456741 env[1054]: time="2024-02-08T23:32:35.456633882Z" level=info msg="StartContainer for \"d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f\"" Feb 8 23:32:35.498938 systemd[1]: Started cri-containerd-d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f.scope. Feb 8 23:32:35.541827 systemd[1]: cri-containerd-d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f.scope: Deactivated successfully. 
Feb 8 23:32:35.543969 env[1054]: time="2024-02-08T23:32:35.543809563Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a27e672_f70a_4d25_8a24_7419cdf6b327.slice/cri-containerd-d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f.scope/memory.events\": no such file or directory" Feb 8 23:32:35.554798 env[1054]: time="2024-02-08T23:32:35.554700336Z" level=info msg="StartContainer for \"d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f\" returns successfully" Feb 8 23:32:35.598083 env[1054]: time="2024-02-08T23:32:35.598033444Z" level=info msg="shim disconnected" id=d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f Feb 8 23:32:35.598355 env[1054]: time="2024-02-08T23:32:35.598334373Z" level=warning msg="cleaning up after shim disconnected" id=d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f namespace=k8s.io Feb 8 23:32:35.598466 env[1054]: time="2024-02-08T23:32:35.598448425Z" level=info msg="cleaning up dead shim" Feb 8 23:32:35.609925 env[1054]: time="2024-02-08T23:32:35.609894411Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3351 runtime=io.containerd.runc.v2\n" Feb 8 23:32:35.865995 kubelet[1336]: E0208 23:32:35.865813 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:36.401830 env[1054]: time="2024-02-08T23:32:36.401759134Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:32:36.439081 env[1054]: time="2024-02-08T23:32:36.438975858Z" level=info msg="CreateContainer within sandbox \"7acf4f943486bf0b70d66ae9ac051e56cdf9e626892a4bdaa1dbed6f1e9625f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c\"" Feb 8 23:32:36.441367 env[1054]: time="2024-02-08T23:32:36.441281276Z" level=info msg="StartContainer for \"b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c\"" Feb 8 23:32:36.494189 systemd[1]: Started cri-containerd-b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c.scope. Feb 8 23:32:36.577672 env[1054]: time="2024-02-08T23:32:36.577612775Z" level=info msg="StartContainer for \"b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c\" returns successfully" Feb 8 23:32:36.867012 kubelet[1336]: E0208 23:32:36.866917 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:37.346526 systemd[1]: run-containerd-runc-k8s.io-b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c-runc.IzTN0h.mount: Deactivated successfully. 
Feb 8 23:32:37.536697 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:32:37.586435 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 8 23:32:37.867823 kubelet[1336]: E0208 23:32:37.867676 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:38.091570 kubelet[1336]: W0208 23:32:38.088897 1336 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a27e672_f70a_4d25_8a24_7419cdf6b327.slice/cri-containerd-bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b.scope WatchSource:0}: task bddc5f50e528e9b783fa35304a29b818438855e899696ef755137ec79c41789b not found: not found Feb 8 23:32:38.691764 systemd[1]: run-containerd-runc-k8s.io-b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c-runc.2SuFXh.mount: Deactivated successfully. Feb 8 23:32:38.868890 kubelet[1336]: E0208 23:32:38.868773 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:39.869210 kubelet[1336]: E0208 23:32:39.869114 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:40.671898 systemd-networkd[966]: lxc_health: Link UP Feb 8 23:32:40.677767 systemd-networkd[966]: lxc_health: Gained carrier Feb 8 23:32:40.678422 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:32:40.831417 kubelet[1336]: I0208 23:32:40.831368 1336 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-v79rf" podStartSLOduration=8.831325838 podCreationTimestamp="2024-02-08 23:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:32:37.443677292 +0000 UTC m=+104.358937090" watchObservedRunningTime="2024-02-08 23:32:40.831325838 +0000 UTC m=+107.746585626" Feb 8 23:32:40.869503 kubelet[1336]: E0208 23:32:40.869469 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:41.016980 systemd[1]: run-containerd-runc-k8s.io-b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c-runc.QGJ1Ii.mount: Deactivated successfully. Feb 8 23:32:41.201375 kubelet[1336]: W0208 23:32:41.200694 1336 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a27e672_f70a_4d25_8a24_7419cdf6b327.slice/cri-containerd-70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6.scope WatchSource:0}: task 70ecb95561cd5e3e8663c0a2a0234eb4a41f3adbd7db1920df85bc4c80812ce6 not found: not found Feb 8 23:32:41.869904 kubelet[1336]: E0208 23:32:41.869857 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:42.188483 systemd-networkd[966]: lxc_health: Gained IPv6LL Feb 8 23:32:42.870995 kubelet[1336]: E0208 23:32:42.870938 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:43.249971 systemd[1]: run-containerd-runc-k8s.io-b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c-runc.vhfuxL.mount: Deactivated successfully. 
Feb 8 23:32:43.871723 kubelet[1336]: E0208 23:32:43.871690 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:44.313530 kubelet[1336]: W0208 23:32:44.313360 1336 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a27e672_f70a_4d25_8a24_7419cdf6b327.slice/cri-containerd-c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac.scope WatchSource:0}: task c585a8c0c6c95ca89bfd42070790c008a73c4a9e45d6f849606e0c5cfec471ac not found: not found Feb 8 23:32:44.873174 kubelet[1336]: E0208 23:32:44.873127 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:45.455575 systemd[1]: run-containerd-runc-k8s.io-b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c-runc.dbPALj.mount: Deactivated successfully. Feb 8 23:32:45.874728 kubelet[1336]: E0208 23:32:45.874638 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:46.875593 kubelet[1336]: E0208 23:32:46.875537 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:32:47.425624 kubelet[1336]: W0208 23:32:47.425158 1336 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a27e672_f70a_4d25_8a24_7419cdf6b327.slice/cri-containerd-d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f.scope WatchSource:0}: task d16b95cef1b411ab008e8841c1f36261ffd2005cfb4b6c13c3f308d3ba3afe3f not found: not found Feb 8 23:32:47.719214 systemd[1]: run-containerd-runc-k8s.io-b0771ae92399b48c0f1379446601605c44df4366057057ed62df86d80e0baa4c-runc.iFAOqo.mount: Deactivated successfully. 
Feb 8 23:32:47.877466 kubelet[1336]: E0208 23:32:47.877289 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:48.878359 kubelet[1336]: E0208 23:32:48.878313 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:49.879756 kubelet[1336]: E0208 23:32:49.879698 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:50.880288 kubelet[1336]: E0208 23:32:50.880210 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:51.881584 kubelet[1336]: E0208 23:32:51.881528 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:52.883124 kubelet[1336]: E0208 23:32:52.883025 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:53.766356 kubelet[1336]: E0208 23:32:53.766235 1336 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:53.786568 env[1054]: time="2024-02-08T23:32:53.786478991Z" level=info msg="StopPodSandbox for \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\""
Feb 8 23:32:53.787690 env[1054]: time="2024-02-08T23:32:53.787596894Z" level=info msg="TearDown network for sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" successfully"
Feb 8 23:32:53.787903 env[1054]: time="2024-02-08T23:32:53.787858852Z" level=info msg="StopPodSandbox for \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" returns successfully"
Feb 8 23:32:53.789713 env[1054]: time="2024-02-08T23:32:53.789640063Z" level=info msg="RemovePodSandbox for \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\""
Feb 8 23:32:53.789885 env[1054]: time="2024-02-08T23:32:53.789723879Z" level=info msg="Forcibly stopping sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\""
Feb 8 23:32:53.789977 env[1054]: time="2024-02-08T23:32:53.789895930Z" level=info msg="TearDown network for sandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" successfully"
Feb 8 23:32:53.799171 env[1054]: time="2024-02-08T23:32:53.799053610Z" level=info msg="RemovePodSandbox \"687073f1fbdb09e160ae351b13cb92d3f34534345ec6127db6dcf91c4f959d4c\" returns successfully"
Feb 8 23:32:53.800291 env[1054]: time="2024-02-08T23:32:53.800239161Z" level=info msg="StopPodSandbox for \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\""
Feb 8 23:32:53.800755 env[1054]: time="2024-02-08T23:32:53.800677809Z" level=info msg="TearDown network for sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" successfully"
Feb 8 23:32:53.800961 env[1054]: time="2024-02-08T23:32:53.800921052Z" level=info msg="StopPodSandbox for \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" returns successfully"
Feb 8 23:32:53.801705 env[1054]: time="2024-02-08T23:32:53.801662334Z" level=info msg="RemovePodSandbox for \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\""
Feb 8 23:32:53.801950 env[1054]: time="2024-02-08T23:32:53.801883476Z" level=info msg="Forcibly stopping sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\""
Feb 8 23:32:53.802178 env[1054]: time="2024-02-08T23:32:53.802137860Z" level=info msg="TearDown network for sandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" successfully"
Feb 8 23:32:53.807787 env[1054]: time="2024-02-08T23:32:53.807657249Z" level=info msg="RemovePodSandbox \"424a92f6aa57a1cd7a0ce6ae7842283dbeadb021498f7db19482f8655f1f1c6d\" returns successfully"
Feb 8 23:32:53.883996 kubelet[1336]: E0208 23:32:53.883922 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:32:54.884813 kubelet[1336]: E0208 23:32:54.884771 1336 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"