Feb 12 20:25:17.093133 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:25:17.093182 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:25:17.093210 kernel: BIOS-provided physical RAM map:
Feb 12 20:25:17.093227 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:25:17.093244 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:25:17.093260 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:25:17.093279 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 12 20:25:17.093296 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 12 20:25:17.093316 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:25:17.093333 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:25:17.098426 kernel: NX (Execute Disable) protection: active
Feb 12 20:25:17.098441 kernel: SMBIOS 2.8 present.
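The BIOS-e820 lines above describe the firmware-provided physical RAM map. A minimal sketch of how such lines can be parsed and the "usable" ranges totalled — the regex and function names here are illustrative helpers, not part of any tool appearing in this log:

```python
import re

# Matches the payload of a "BIOS-e820: [mem 0xSTART-0xEND] TYPE" line,
# as printed by the kernel during early boot.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def parse_e820(lines):
    """Yield (start, end, type) tuples from kernel log lines."""
    for line in lines:
        m = E820_RE.search(line)
        if m:
            yield int(m.group(1), 16), int(m.group(2), 16), m.group(3)

def usable_bytes(lines):
    """Sum the sizes of all ranges the firmware marked 'usable'.
    Ranges are inclusive, hence the +1."""
    return sum(end - start + 1
               for start, end, typ in parse_e820(lines)
               if typ == "usable")

log = [
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved",
    "kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable",
]
print(usable_bytes(log))  # 0x9fc00 + 0x7fedd000 = 2146946048 bytes
```

Run against the full map above, this reproduces the roughly 2 GiB of RAM this guest was given.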
Feb 12 20:25:17.098454 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 12 20:25:17.098467 kernel: Hypervisor detected: KVM
Feb 12 20:25:17.098483 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:25:17.098503 kernel: kvm-clock: cpu 0, msr 3afaa001, primary cpu clock
Feb 12 20:25:17.098516 kernel: kvm-clock: using sched offset of 6987440040 cycles
Feb 12 20:25:17.098530 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:25:17.098544 kernel: tsc: Detected 1996.249 MHz processor
Feb 12 20:25:17.098558 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:25:17.098573 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:25:17.098587 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 12 20:25:17.098601 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:25:17.098618 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:25:17.098631 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 12 20:25:17.098645 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:17.098659 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:17.098673 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:17.098687 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 12 20:25:17.098701 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:17.098715 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:17.098729 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 12 20:25:17.098746 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 12 20:25:17.098759 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 12 20:25:17.098773 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 12 20:25:17.098787 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 12 20:25:17.098800 kernel: No NUMA configuration found
Feb 12 20:25:17.098813 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 12 20:25:17.098827 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 12 20:25:17.098841 kernel: Zone ranges:
Feb 12 20:25:17.098863 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:25:17.098879 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 12 20:25:17.098893 kernel: Normal empty
Feb 12 20:25:17.098908 kernel: Movable zone start for each node
Feb 12 20:25:17.098922 kernel: Early memory node ranges
Feb 12 20:25:17.098936 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:25:17.098953 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 12 20:25:17.098967 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 12 20:25:17.098981 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:25:17.098995 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:25:17.099010 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 12 20:25:17.099023 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:25:17.099038 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:25:17.099053 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:25:17.099067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:25:17.099084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:25:17.099099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:25:17.099113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:25:17.099127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:25:17.099141 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:25:17.099155 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 20:25:17.099169 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 12 20:25:17.099183 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:25:17.099197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:25:17.099212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 20:25:17.099230 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 20:25:17.099244 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 20:25:17.099258 kernel: pcpu-alloc: [0] 0 1
Feb 12 20:25:17.099272 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 12 20:25:17.099286 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 12 20:25:17.099300 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 12 20:25:17.099314 kernel: Policy zone: DMA32
Feb 12 20:25:17.099331 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:25:17.099378 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
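The kernel command line above is a space-separated mix of `key=value` pairs and bare flags, with some keys repeated (`rootflags=rw` appears twice, and `console=` twice). A minimal sketch of splitting such a line into a dict — `parse_cmdline` is an illustrative helper, and note a real parser would preserve repeated keys (as the kernel does) rather than letting the last one win:

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict; bare words map to True.
    Only the FIRST '=' separates key from value, so root=LABEL=ROOT
    parses as root -> 'LABEL=ROOT'. Later duplicates overwrite earlier
    ones in this simplified sketch."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 "
           "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
           "flatcar.first_boot=detected flatcar.oem.id=openstack")
params = parse_cmdline(cmdline)
print(params["root"])        # LABEL=ROOT
print(params["console"])     # tty0  (the second console= wins here)
```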
Feb 12 20:25:17.099393 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:25:17.099407 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 20:25:17.099422 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:25:17.099436 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 12 20:25:17.099451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:25:17.099465 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:25:17.099480 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:25:17.099497 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:25:17.099513 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:25:17.099528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:25:17.099542 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:25:17.099557 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:25:17.099571 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:25:17.099585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:25:17.099599 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 20:25:17.099613 kernel: Console: colour VGA+ 80x25
Feb 12 20:25:17.099627 kernel: printk: console [tty0] enabled
Feb 12 20:25:17.099645 kernel: printk: console [ttyS0] enabled
Feb 12 20:25:17.099659 kernel: ACPI: Core revision 20210730
Feb 12 20:25:17.099674 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:25:17.099688 kernel: x2apic enabled
Feb 12 20:25:17.099702 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:25:17.099716 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:25:17.099730 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:25:17.099745 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 12 20:25:17.099759 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 12 20:25:17.099776 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 12 20:25:17.099791 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:25:17.099805 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:25:17.099819 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:25:17.099833 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:25:17.099847 kernel: Speculative Store Bypass: Vulnerable
Feb 12 20:25:17.099861 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 12 20:25:17.099875 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:25:17.099889 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:25:17.099906 kernel: LSM: Security Framework initializing
Feb 12 20:25:17.099920 kernel: SELinux: Initializing.
Feb 12 20:25:17.099953 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:25:17.099968 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:25:17.099983 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 12 20:25:17.099997 kernel: Performance Events: AMD PMU driver.
Feb 12 20:25:17.100011 kernel: ... version: 0
Feb 12 20:25:17.100025 kernel: ... bit width: 48
Feb 12 20:25:17.100039 kernel: ... generic registers: 4
Feb 12 20:25:17.100064 kernel: ... value mask: 0000ffffffffffff
Feb 12 20:25:17.100079 kernel: ... max period: 00007fffffffffff
Feb 12 20:25:17.100094 kernel: ... fixed-purpose events: 0
Feb 12 20:25:17.100111 kernel: ... event mask: 000000000000000f
Feb 12 20:25:17.100125 kernel: signal: max sigframe size: 1440
Feb 12 20:25:17.100140 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:25:17.100154 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:25:17.100169 kernel: x86: Booting SMP configuration:
Feb 12 20:25:17.100186 kernel: .... node #0, CPUs: #1
Feb 12 20:25:17.100201 kernel: kvm-clock: cpu 1, msr 3afaa041, secondary cpu clock
Feb 12 20:25:17.100216 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 12 20:25:17.100231 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:25:17.100246 kernel: smpboot: Max logical packages: 2
Feb 12 20:25:17.100260 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 12 20:25:17.100275 kernel: devtmpfs: initialized
Feb 12 20:25:17.100290 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:25:17.100305 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:25:17.100322 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:25:17.101362 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:25:17.101377 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:25:17.101385 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:25:17.101394 kernel: audit: type=2000 audit(1707769516.614:1): state=initialized audit_enabled=0 res=1
Feb 12 20:25:17.101403 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:25:17.101412 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:25:17.101420 kernel: cpuidle: using governor menu
Feb 12 20:25:17.101429 kernel: ACPI: bus type PCI registered
Feb 12 20:25:17.101441 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:25:17.101449 kernel: dca service started, version 1.12.1
Feb 12 20:25:17.101458 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:25:17.101466 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:25:17.101475 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:25:17.101484 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:25:17.101492 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:25:17.101501 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:25:17.101509 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:25:17.101519 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:25:17.101528 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:25:17.101536 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:25:17.101545 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:25:17.101553 kernel: ACPI: Interpreter enabled
Feb 12 20:25:17.101562 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:25:17.101570 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:25:17.101579 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:25:17.101587 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:25:17.101597 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:25:17.101760 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:25:17.101846 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 20:25:17.101859 kernel: acpiphp: Slot [3] registered
Feb 12 20:25:17.101867 kernel: acpiphp: Slot [4] registered
Feb 12 20:25:17.101876 kernel: acpiphp: Slot [5] registered
Feb 12 20:25:17.101884 kernel: acpiphp: Slot [6] registered
Feb 12 20:25:17.101894 kernel: acpiphp: Slot [7] registered
Feb 12 20:25:17.101902 kernel: acpiphp: Slot [8] registered
Feb 12 20:25:17.101910 kernel: acpiphp: Slot [9] registered
Feb 12 20:25:17.101918 kernel: acpiphp: Slot [10] registered
Feb 12 20:25:17.101926 kernel: acpiphp: Slot [11] registered
Feb 12 20:25:17.101934 kernel: acpiphp: Slot [12] registered
Feb 12 20:25:17.101941 kernel: acpiphp: Slot [13] registered
Feb 12 20:25:17.101949 kernel: acpiphp: Slot [14] registered
Feb 12 20:25:17.101957 kernel: acpiphp: Slot [15] registered
Feb 12 20:25:17.101965 kernel: acpiphp: Slot [16] registered
Feb 12 20:25:17.101975 kernel: acpiphp: Slot [17] registered
Feb 12 20:25:17.101983 kernel: acpiphp: Slot [18] registered
Feb 12 20:25:17.101991 kernel: acpiphp: Slot [19] registered
Feb 12 20:25:17.101998 kernel: acpiphp: Slot [20] registered
Feb 12 20:25:17.102006 kernel: acpiphp: Slot [21] registered
Feb 12 20:25:17.102014 kernel: acpiphp: Slot [22] registered
Feb 12 20:25:17.102022 kernel: acpiphp: Slot [23] registered
Feb 12 20:25:17.102030 kernel: acpiphp: Slot [24] registered
Feb 12 20:25:17.102038 kernel: acpiphp: Slot [25] registered
Feb 12 20:25:17.102047 kernel: acpiphp: Slot [26] registered
Feb 12 20:25:17.102055 kernel: acpiphp: Slot [27] registered
Feb 12 20:25:17.102063 kernel: acpiphp: Slot [28] registered
Feb 12 20:25:17.102071 kernel: acpiphp: Slot [29] registered
Feb 12 20:25:17.102078 kernel: acpiphp: Slot [30] registered
Feb 12 20:25:17.102086 kernel: acpiphp: Slot [31] registered
Feb 12 20:25:17.102094 kernel: PCI host bridge to bus 0000:00
Feb 12 20:25:17.102179 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:25:17.102254 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:25:17.102329 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:25:17.102430 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 20:25:17.102511 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:25:17.102584 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:25:17.102680 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:25:17.102772 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:25:17.102879 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:25:17.102966 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 12 20:25:17.103050 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:25:17.103132 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:25:17.103216 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:25:17.103300 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:25:17.103411 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:25:17.103500 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:25:17.103583 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:25:17.103674 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 12 20:25:17.103758 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 12 20:25:17.103847 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 12 20:25:17.103946 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 12 20:25:17.104039 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 12 20:25:17.104127 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:25:17.104248 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:25:17.104356 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 12 20:25:17.104451 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 12 20:25:17.104539 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 12 20:25:17.104626 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 12 20:25:17.104726 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:25:17.104819 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:25:17.104908 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 12 20:25:17.104995 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 12 20:25:17.105101 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 12 20:25:17.105192 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 12 20:25:17.105279 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 12 20:25:17.105398 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:25:17.105490 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 12 20:25:17.105580 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 12 20:25:17.105593 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:25:17.105602 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:25:17.105611 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:25:17.105619 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:25:17.105628 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:25:17.105639 kernel: iommu: Default domain type: Translated
Feb 12 20:25:17.105648 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:25:17.105734 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:25:17.105822 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:25:17.105909 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:25:17.105922 kernel: vgaarb: loaded
Feb 12 20:25:17.105931 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:25:17.105940 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:25:17.105948 kernel: PTP clock support registered
Feb 12 20:25:17.105959 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:25:17.105968 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:25:17.105977 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:25:17.105985 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 12 20:25:17.105994 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:25:17.106002 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:25:17.106011 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:25:17.106019 kernel: pnp: PnP ACPI init
Feb 12 20:25:17.106119 kernel: pnp 00:03: [dma 2]
Feb 12 20:25:17.106136 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 20:25:17.106145 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:25:17.106154 kernel: NET: Registered PF_INET protocol family
Feb 12 20:25:17.106162 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:25:17.106171 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 20:25:17.106180 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:25:17.106189 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:25:17.106197 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 20:25:17.106207 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 20:25:17.106216 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:25:17.106225 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:25:17.106234 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:25:17.106242 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:25:17.106322 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:25:17.112488 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:25:17.112582 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:25:17.112659 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 20:25:17.112754 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:25:17.112863 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:25:17.112952 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:25:17.113036 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:25:17.113049 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:25:17.113058 kernel: Initialise system trusted keyrings
Feb 12 20:25:17.113066 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 20:25:17.113074 kernel: Key type asymmetric registered
Feb 12 20:25:17.113085 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:25:17.113093 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:25:17.113101 kernel: io scheduler mq-deadline registered
Feb 12 20:25:17.113109 kernel: io scheduler kyber registered
Feb 12 20:25:17.113117 kernel: io scheduler bfq registered
Feb 12 20:25:17.113125 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:25:17.113134 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 12 20:25:17.113142 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:25:17.113150 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 20:25:17.113160 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:25:17.113168 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:25:17.113176 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:25:17.113183 kernel: random: crng init done
Feb 12 20:25:17.113191 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:25:17.113199 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:25:17.113207 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:25:17.113317 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 12 20:25:17.113333 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:25:17.113426 kernel: rtc_cmos 00:04: registered as rtc0
Feb 12 20:25:17.113501 kernel: rtc_cmos 00:04: setting system clock to 2024-02-12T20:25:16 UTC (1707769516)
Feb 12 20:25:17.113575 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 12 20:25:17.113587 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:25:17.113595 kernel: Segment Routing with IPv6
Feb 12 20:25:17.113603 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:25:17.113611 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:25:17.113619 kernel: Key type dns_resolver registered
Feb 12 20:25:17.113630 kernel: IPI shorthand broadcast: enabled
Feb 12 20:25:17.113638 kernel: sched_clock: Marking stable (716006602, 127574704)->(905774691, -62193385)
Feb 12 20:25:17.113646 kernel: registered taskstats version 1
Feb 12 20:25:17.113654 kernel: Loading compiled-in X.509 certificates
Feb 12 20:25:17.113663 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:25:17.113670 kernel: Key type .fscrypt registered
Feb 12 20:25:17.113678 kernel: Key type fscrypt-provisioning registered
Feb 12 20:25:17.113687 kernel: ima: No TPM chip found, activating TPM-bypass!
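The rtc_cmos line above pairs a human-readable timestamp (2024-02-12T20:25:16 UTC) with its Unix epoch value (1707769516), the same epoch base used by the `audit(…)` entries elsewhere in this log. The correspondence can be checked directly (a quick stdlib sketch, nothing here is specific to this log):

```python
from datetime import datetime, timezone

# Epoch seconds as logged by rtc_cmos when it set the system clock.
epoch = 1707769516
dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(dt.isoformat())  # 2024-02-12T20:25:16+00:00
```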
Feb 12 20:25:17.113696 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:25:17.113705 kernel: ima: No architecture policies found
Feb 12 20:25:17.113713 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:25:17.113721 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:25:17.113729 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:25:17.113737 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:25:17.113745 kernel: Run /init as init process
Feb 12 20:25:17.113753 kernel: with arguments:
Feb 12 20:25:17.113761 kernel: /init
Feb 12 20:25:17.113768 kernel: with environment:
Feb 12 20:25:17.113778 kernel: HOME=/
Feb 12 20:25:17.113785 kernel: TERM=linux
Feb 12 20:25:17.113793 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:25:17.113804 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:25:17.113815 systemd[1]: Detected virtualization kvm.
Feb 12 20:25:17.113824 systemd[1]: Detected architecture x86-64.
Feb 12 20:25:17.113832 systemd[1]: Running in initrd.
Feb 12 20:25:17.113843 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:25:17.113851 systemd[1]: Hostname set to .
Feb 12 20:25:17.113861 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:25:17.113870 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:25:17.113879 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:25:17.113887 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:25:17.113896 systemd[1]: Reached target paths.target.
Feb 12 20:25:17.113904 systemd[1]: Reached target slices.target.
Feb 12 20:25:17.113914 systemd[1]: Reached target swap.target.
Feb 12 20:25:17.113923 systemd[1]: Reached target timers.target.
Feb 12 20:25:17.113932 systemd[1]: Listening on iscsid.socket.
Feb 12 20:25:17.113940 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:25:17.113949 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:25:17.113958 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:25:17.113966 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:25:17.113975 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:25:17.113985 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:25:17.113994 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:25:17.114002 systemd[1]: Reached target sockets.target.
Feb 12 20:25:17.114011 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:25:17.114028 systemd[1]: Finished network-cleanup.service.
Feb 12 20:25:17.114038 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:25:17.114048 systemd[1]: Starting systemd-journald.service...
Feb 12 20:25:17.114057 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:25:17.114067 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:25:17.114075 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:25:17.114084 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:25:17.114093 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:25:17.114105 systemd-journald[185]: Journal started
Feb 12 20:25:17.114156 systemd-journald[185]: Runtime Journal (/run/log/journal/7eddc33fbadb4e93b617549219825a25) is 4.9M, max 39.5M, 34.5M free.
Feb 12 20:25:17.076381 systemd-modules-load[186]: Inserted module 'overlay'
Feb 12 20:25:17.140217 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:25:17.140247 systemd[1]: Started systemd-journald.service.
Feb 12 20:25:17.140276 kernel: audit: type=1130 audit(1707769517.133:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.140290 kernel: Bridge firewalling registered
Feb 12 20:25:17.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.124858 systemd-resolved[187]: Positive Trust Anchors:
Feb 12 20:25:17.150695 kernel: audit: type=1130 audit(1707769517.140:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.124874 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:25:17.155211 kernel: audit: type=1130 audit(1707769517.150:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.124910 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:25:17.161280 kernel: audit: type=1130 audit(1707769517.155:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.127764 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 12 20:25:17.140762 systemd[1]: Started systemd-resolved.service.
Feb 12 20:25:17.143393 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 12 20:25:17.151415 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:25:17.155850 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:25:17.168123 kernel: SCSI subsystem initialized
Feb 12 20:25:17.163185 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:25:17.166182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:25:17.177069 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:25:17.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.182392 kernel: audit: type=1130 audit(1707769517.177:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.182421 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:25:17.187446 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:25:17.187474 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:25:17.192948 systemd-modules-load[186]: Inserted module 'dm_multipath'
Feb 12 20:25:17.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.193812 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:25:17.199069 kernel: audit: type=1130 audit(1707769517.193:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.195022 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:25:17.200583 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:25:17.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.205895 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:25:17.206515 kernel: audit: type=1130 audit(1707769517.201:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.212307 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:25:17.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.216993 dracut-cmdline[206]: dracut-dracut-053
Feb 12 20:25:17.217620 kernel: audit: type=1130 audit(1707769517.213:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:17.217748 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:25:17.285430 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:25:17.299397 kernel: iscsi: registered transport (tcp)
Feb 12 20:25:17.326624 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:25:17.326709 kernel: QLogic iSCSI HBA Driver
Feb 12 20:25:17.382990 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:25:17.384644 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:25:17.394494 kernel: audit: type=1130 audit(1707769517.383:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:17.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 12 20:25:17.445456 kernel: raid6: sse2x4 gen() 9676 MB/s Feb 12 20:25:17.462436 kernel: raid6: sse2x4 xor() 6491 MB/s Feb 12 20:25:17.479558 kernel: raid6: sse2x2 gen() 12999 MB/s Feb 12 20:25:17.496448 kernel: raid6: sse2x2 xor() 8179 MB/s Feb 12 20:25:17.513451 kernel: raid6: sse2x1 gen() 9934 MB/s Feb 12 20:25:17.531287 kernel: raid6: sse2x1 xor() 6358 MB/s Feb 12 20:25:17.531393 kernel: raid6: using algorithm sse2x2 gen() 12999 MB/s Feb 12 20:25:17.531424 kernel: raid6: .... xor() 8179 MB/s, rmw enabled Feb 12 20:25:17.532259 kernel: raid6: using ssse3x2 recovery algorithm Feb 12 20:25:17.548410 kernel: xor: measuring software checksum speed Feb 12 20:25:17.551219 kernel: prefetch64-sse : 17266 MB/sec Feb 12 20:25:17.551277 kernel: generic_sse : 15572 MB/sec Feb 12 20:25:17.551304 kernel: xor: using function: prefetch64-sse (17266 MB/sec) Feb 12 20:25:17.671405 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:25:17.687427 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:25:17.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:17.691000 audit: BPF prog-id=7 op=LOAD Feb 12 20:25:17.692000 audit: BPF prog-id=8 op=LOAD Feb 12 20:25:17.692931 systemd[1]: Starting systemd-udevd.service... Feb 12 20:25:17.709625 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 20:25:17.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:17.714913 systemd[1]: Started systemd-udevd.service. Feb 12 20:25:17.720232 systemd[1]: Starting dracut-pre-trigger.service... 
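The dracut-cmdline record earlier in the log echoes the full kernel command line (`root=LABEL=ROOT`, `flatcar.oem.id=openstack`, `verity.usrhash=...`, and so on). A minimal, illustrative sketch of how such a line can be split into key/value parameters (the function name is my own; real consumers like dracut also keep repeated keys such as `console=`, whereas this dict keeps only the last):

```python
# Illustrative parser for kernel command-line tokens like those dracut logs.
# Tokens with '=' become key/value pairs; bare flags map to True.
# A plain dict keeps only the last value for repeated keys (e.g. console=).
def parse_cmdline(cmdline):
    params = {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")  # split on first '=' only
            params[key] = value
        else:
            params[token] = True
    return params

# A subset of the parameters from the log above:
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected "
           "flatcar.oem.id=openstack")
params = parse_cmdline(cmdline)
```

Note that `root=LABEL=ROOT` splits only on the first `=`, yielding the value `LABEL=ROOT`.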
Feb 12 20:25:17.736786 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 12 20:25:17.789209 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:25:17.791004 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:25:17.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:17.834781 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:25:17.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:17.919377 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 12 20:25:17.927368 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:25:17.927413 kernel: GPT:17805311 != 41943039 Feb 12 20:25:17.927426 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:25:17.927437 kernel: GPT:17805311 != 41943039 Feb 12 20:25:17.927447 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:25:17.927458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:25:17.941367 kernel: libata version 3.00 loaded. Feb 12 20:25:17.953498 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (433) Feb 12 20:25:17.961369 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:25:17.964374 kernel: scsi host0: ata_piix Feb 12 20:25:17.965177 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:25:18.020802 kernel: scsi host1: ata_piix Feb 12 20:25:18.020980 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 12 20:25:18.020994 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 12 20:25:18.028035 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
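The `GPT:17805311 != 41943039` warnings above come from a simple consistency check: a GPT's backup (alternate) header must sit on the last LBA of the disk, and here it sits where the end of a smaller original image used to be (the disk was grown to 20 GiB after imaging). A sketch of that check, using the values from the log (the function name is illustrative, not kernel code):

```python
# Sketch of the consistency check behind the kernel's GPT warning above:
# the backup GPT header belongs on the last LBA of the disk.
def gpt_alt_header_ok(disk_size_bytes, sector_size, alt_header_lba):
    last_lba = disk_size_bytes // sector_size - 1
    return alt_header_lba == last_lba

# From the log: 41943040 512-byte sectors, backup header found at LBA 17805311
disk_bytes = 41943040 * 512
mismatch = not gpt_alt_header_ok(disk_bytes, 512, 17805311)  # triggers warning
correct = gpt_alt_header_ok(disk_bytes, 512, 41943039)       # last LBA of disk
```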
Feb 12 20:25:18.032364 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:25:18.035700 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:25:18.036320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:25:18.038409 systemd[1]: Starting disk-uuid.service... Feb 12 20:25:18.060544 disk-uuid[461]: Primary Header is updated. Feb 12 20:25:18.060544 disk-uuid[461]: Secondary Entries is updated. Feb 12 20:25:18.060544 disk-uuid[461]: Secondary Header is updated. Feb 12 20:25:18.077146 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:25:18.085392 kernel: GPT:disk_guids don't match. Feb 12 20:25:18.085478 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:25:18.085505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:25:18.099391 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:25:19.287300 disk-uuid[462]: The operation has completed successfully. Feb 12 20:25:19.288980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:25:19.673170 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:25:19.674771 systemd[1]: Finished disk-uuid.service. Feb 12 20:25:19.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:19.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:19.678221 systemd[1]: Starting verity-setup.service... Feb 12 20:25:19.789401 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 12 20:25:20.249217 systemd[1]: Found device dev-mapper-usr.device. 
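The disk-uuid records above ("Primary Header is updated", "GPT:disk_guids don't match") reflect the GPT disk and partition GUIDs being regenerated on first boot. A fresh random GUID of the same textual shape as the PARTUUID on the kernel command line can be produced from the standard library (this only illustrates the identifier format, not the service's actual mechanism):

```python
import uuid

# The disk-uuid service regenerates GPT GUIDs on first boot; a random
# (version 4) GUID has the same 8-4-4-4-12 hex shape as e.g.
# 7130c94a-213a-4e5a-8e26-6cce9662f132 from the command line above.
new_guid = uuid.uuid4()
guid_text = str(new_guid)  # 36 characters including the four hyphens
```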
Feb 12 20:25:20.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.252884 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:25:20.254650 systemd[1]: Finished verity-setup.service. Feb 12 20:25:20.394361 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:25:20.394857 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:25:20.396188 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:25:20.397712 systemd[1]: Starting ignition-setup.service... Feb 12 20:25:20.400249 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 20:25:20.421859 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:25:20.421906 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:25:20.421924 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:25:20.450115 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:25:20.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.464599 systemd[1]: Finished ignition-setup.service. Feb 12 20:25:20.466224 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:25:20.538605 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:25:20.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.542000 audit: BPF prog-id=9 op=LOAD Feb 12 20:25:20.545092 systemd[1]: Starting systemd-networkd.service... 
Feb 12 20:25:20.580334 systemd-networkd[632]: lo: Link UP Feb 12 20:25:20.581073 systemd-networkd[632]: lo: Gained carrier Feb 12 20:25:20.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.582302 systemd-networkd[632]: Enumeration completed Feb 12 20:25:20.582403 systemd[1]: Started systemd-networkd.service. Feb 12 20:25:20.582892 systemd[1]: Reached target network.target. Feb 12 20:25:20.587094 systemd[1]: Starting iscsiuio.service... Feb 12 20:25:20.589626 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:25:20.592816 systemd-networkd[632]: eth0: Link UP Feb 12 20:25:20.592821 systemd-networkd[632]: eth0: Gained carrier Feb 12 20:25:20.630483 systemd[1]: Started iscsiuio.service. Feb 12 20:25:20.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.633288 systemd[1]: Starting iscsid.service... Feb 12 20:25:20.634487 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.189/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:25:20.636177 iscsid[637]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:25:20.636177 iscsid[637]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 20:25:20.636177 iscsid[637]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Feb 12 20:25:20.636177 iscsid[637]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:25:20.636177 iscsid[637]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:25:20.636177 iscsid[637]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:25:20.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.637441 systemd[1]: Started iscsid.service. Feb 12 20:25:20.641783 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:25:20.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.657099 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:25:20.657634 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:25:20.658055 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:25:20.658499 systemd[1]: Reached target remote-fs.target. Feb 12 20:25:20.659611 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:25:20.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.672829 systemd[1]: Finished dracut-pre-mount.service. 
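iscsid's warning above spells out the expected InitiatorName format: `iqn.<yyyy-mm>.<reversed domain name>[:identifier]`. A small hedged helper (the function name is my own) that produces a line suitable for `/etc/iscsi/initiatorname.iscsi`:

```python
# Illustrative builder for the InitiatorName format iscsid describes above:
# iqn.<yyyy-mm>.<reversed domain name>[:identifier]
def make_initiator_name(year_month, reversed_domain, identifier=None):
    iqn = "iqn.%s.%s" % (year_month, reversed_domain)
    if identifier:
        iqn += ":" + identifier
    return "InitiatorName=" + iqn

# Reproduces the example iscsid prints in the log:
line = make_initiator_name("2001-04", "com.redhat", "fc6")
```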
Feb 12 20:25:20.933932 ignition[576]: Ignition 2.14.0 Feb 12 20:25:20.933965 ignition[576]: Stage: fetch-offline Feb 12 20:25:20.934164 ignition[576]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:20.934221 ignition[576]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:25:20.938229 ignition[576]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:25:20.938561 ignition[576]: parsed url from cmdline: "" Feb 12 20:25:20.938571 ignition[576]: no config URL provided Feb 12 20:25:20.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:20.941324 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:25:20.938585 ignition[576]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:25:20.944035 systemd[1]: Starting ignition-fetch.service... 
Feb 12 20:25:20.938604 ignition[576]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:25:20.938617 ignition[576]: failed to fetch config: resource requires networking Feb 12 20:25:20.939096 ignition[576]: Ignition finished successfully Feb 12 20:25:20.963030 ignition[655]: Ignition 2.14.0 Feb 12 20:25:20.963058 ignition[655]: Stage: fetch Feb 12 20:25:20.963378 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:20.963429 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:25:20.966169 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:25:20.966446 ignition[655]: parsed url from cmdline: "" Feb 12 20:25:20.966459 ignition[655]: no config URL provided Feb 12 20:25:20.966478 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:25:20.966508 ignition[655]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:25:20.971993 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 12 20:25:20.972054 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 12 20:25:20.972275 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 12 20:25:21.180832 ignition[655]: GET result: OK Feb 12 20:25:21.181041 ignition[655]: parsing config with SHA512: 8e36ecee9e15cd7a322e5c0d57aedbff4c4b520a4ba5bd9f567db92bbf765ab8383a3f09d6108114a14ef09cf0715e2d5824b827b4e083ed7d8c3044c1be18eb Feb 12 20:25:21.227211 unknown[655]: fetched base config from "system" Feb 12 20:25:21.228034 unknown[655]: fetched base config from "system" Feb 12 20:25:21.228671 unknown[655]: fetched user config from "openstack" Feb 12 20:25:21.229818 ignition[655]: fetch: fetch complete Feb 12 20:25:21.230353 ignition[655]: fetch: fetch passed Feb 12 20:25:21.230947 ignition[655]: Ignition finished successfully Feb 12 20:25:21.233292 systemd[1]: Finished ignition-fetch.service. Feb 12 20:25:21.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.237099 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 12 20:25:21.237130 kernel: audit: type=1130 audit(1707769521.234:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.253043 systemd[1]: Starting ignition-kargs.service... 
Feb 12 20:25:21.272049 ignition[661]: Ignition 2.14.0 Feb 12 20:25:21.272074 ignition[661]: Stage: kargs Feb 12 20:25:21.272303 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:21.272408 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:25:21.274412 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:25:21.276925 ignition[661]: kargs: kargs passed Feb 12 20:25:21.277017 ignition[661]: Ignition finished successfully Feb 12 20:25:21.278825 systemd[1]: Finished ignition-kargs.service. Feb 12 20:25:21.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.280859 systemd[1]: Starting ignition-disks.service... Feb 12 20:25:21.287960 kernel: audit: type=1130 audit(1707769521.279:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.290861 ignition[666]: Ignition 2.14.0 Feb 12 20:25:21.291699 ignition[666]: Stage: disks Feb 12 20:25:21.292486 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:21.293315 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:25:21.294892 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:25:21.297456 ignition[666]: disks: disks passed Feb 12 20:25:21.298042 ignition[666]: Ignition finished successfully Feb 12 20:25:21.299833 systemd[1]: Finished ignition-disks.service. 
Feb 12 20:25:21.309310 kernel: audit: type=1130 audit(1707769521.300:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.300586 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:25:21.309788 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:25:21.311421 systemd[1]: Reached target local-fs.target. Feb 12 20:25:21.313002 systemd[1]: Reached target sysinit.target. Feb 12 20:25:21.314528 systemd[1]: Reached target basic.target. Feb 12 20:25:21.317005 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:25:21.340428 systemd-fsck[674]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks Feb 12 20:25:21.353582 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:25:21.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.356892 systemd[1]: Mounting sysroot.mount... Feb 12 20:25:21.361201 kernel: audit: type=1130 audit(1707769521.354:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.503561 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:25:21.505223 systemd[1]: Mounted sysroot.mount. Feb 12 20:25:21.506561 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:25:21.511127 systemd[1]: Mounting sysroot-usr.mount... 
Feb 12 20:25:21.514601 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:25:21.516531 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 12 20:25:21.517765 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:25:21.517865 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:25:21.533118 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:25:21.555194 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:25:21.561256 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:25:21.577142 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:25:21.591409 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681) Feb 12 20:25:21.600899 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:25:21.600978 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:25:21.601006 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:25:21.604627 initrd-setup-root[694]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:25:21.618628 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:25:21.620476 initrd-setup-root[720]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:25:21.631598 initrd-setup-root[728]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:25:21.717268 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:25:21.735265 kernel: audit: type=1130 audit(1707769521.718:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:25:21.721638 systemd[1]: Starting ignition-mount.service... Feb 12 20:25:21.740092 systemd[1]: Starting sysroot-boot.service... Feb 12 20:25:21.747855 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 20:25:21.748287 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 20:25:21.765212 ignition[748]: INFO : Ignition 2.14.0 Feb 12 20:25:21.766187 ignition[748]: INFO : Stage: mount Feb 12 20:25:21.766898 ignition[748]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:21.767697 ignition[748]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:25:21.770082 ignition[748]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:25:21.772177 ignition[748]: INFO : mount: mount passed Feb 12 20:25:21.772783 ignition[748]: INFO : Ignition finished successfully Feb 12 20:25:21.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.774040 systemd[1]: Finished ignition-mount.service. Feb 12 20:25:21.779364 kernel: audit: type=1130 audit(1707769521.774:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.868811 coreos-metadata[680]: Feb 12 20:25:21.868 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 12 20:25:21.873828 systemd[1]: Finished sysroot-boot.service. Feb 12 20:25:21.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:21.888415 kernel: audit: type=1130 audit(1707769521.875:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.939431 coreos-metadata[680]: Feb 12 20:25:21.939 INFO Fetch successful Feb 12 20:25:21.941520 coreos-metadata[680]: Feb 12 20:25:21.941 INFO wrote hostname ci-3510-3-2-4-778020c044.novalocal to /sysroot/etc/hostname Feb 12 20:25:21.961117 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 12 20:25:21.961392 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 12 20:25:21.982920 kernel: audit: type=1130 audit(1707769521.962:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.982968 kernel: audit: type=1131 audit(1707769521.962:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.964497 systemd[1]: Starting ignition-files.service... Feb 12 20:25:21.991027 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 12 20:25:22.072883 systemd-networkd[632]: eth0: Gained IPv6LL Feb 12 20:25:22.140438 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (759) Feb 12 20:25:22.147403 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:25:22.147465 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:25:22.147492 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:25:22.162291 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:25:22.211011 ignition[778]: INFO : Ignition 2.14.0 Feb 12 20:25:22.211011 ignition[778]: INFO : Stage: files Feb 12 20:25:22.214289 ignition[778]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:22.214289 ignition[778]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:25:22.214289 ignition[778]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:25:22.222163 ignition[778]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:25:22.222163 ignition[778]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:25:22.222163 ignition[778]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:25:22.229117 ignition[778]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:25:22.229117 ignition[778]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:25:22.233959 ignition[778]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:25:22.233959 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:25:22.233959 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 20:25:22.231397 unknown[778]: wrote ssh authorized keys file for user: core Feb 12 20:25:22.834072 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:25:23.778967 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 20:25:23.778967 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:25:23.785083 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:25:23.785083 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:25:24.254242 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:25:24.754758 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 20:25:24.758276 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:25:24.779789 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:25:24.779789 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:25:24.920772 
ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:25:25.987562 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Feb 12 20:25:25.987562 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:25:25.987562 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:25:25.994437 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:25:26.106726 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 20:25:28.276880 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Feb 12 20:25:28.278645 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:25:28.279537 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:25:28.280609 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:25:28.281649 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:25:28.282508 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:25:28.316891 ignition[778]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:25:28.317796 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:25:28.317796 ignition[778]: INFO : files: op(a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:25:28.345695 ignition[778]: INFO : files: op(a): op(b): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(c): [started] processing unit "coreos-metadata.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(c): op(d): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(c): [finished] processing unit "coreos-metadata.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(e): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(e): op(f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(e): op(f): 
[finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(e): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(10): [started] processing unit "prepare-critools.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(10): [finished] processing unit "prepare-critools.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(12): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:25:28.348532 ignition[778]: INFO : files: op(13): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:25:28.386315 ignition[778]: INFO : files: op(13): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:25:28.386315 ignition[778]: INFO : files: op(14): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:25:28.386315 ignition[778]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:25:28.410619 ignition[778]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:25:28.413429 ignition[778]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:25:28.413429 ignition[778]: INFO : files: files passed Feb 12 20:25:28.413429 
ignition[778]: INFO : Ignition finished successfully Feb 12 20:25:28.431521 kernel: audit: type=1130 audit(1707769528.418:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.414919 systemd[1]: Finished ignition-files.service. Feb 12 20:25:28.422985 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:25:28.430408 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:25:28.432024 systemd[1]: Starting ignition-quench.service... Feb 12 20:25:28.488684 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:25:28.508985 kernel: audit: type=1130 audit(1707769528.489:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.509045 kernel: audit: type=1131 audit(1707769528.490:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.488977 systemd[1]: Finished ignition-quench.service. 
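The `file matches expected sum of:` lines above show Ignition comparing each downloaded artifact (CNI plugins, crictl, kubeadm, kubelet) against a SHA-512 digest before committing it under /sysroot. A minimal sketch of that check, assuming nothing beyond Python's standard `hashlib` (the function name is illustrative; Ignition itself is written in Go):

```python
import hashlib

def verify_sha512(data: bytes, expected_hex: str) -> bool:
    """Return True if `data` hashes to the expected SHA-512 digest (hex string)."""
    return hashlib.sha512(data).hexdigest() == expected_hex.lower()

# Known SHA-512 digest of the empty byte string, used here as a self-check:
EMPTY_SHA512 = (
    "cf83e1357eefb8bdf1542850d66d8007"
    "d620e4050b5715dc83f4a921d36ce9ce"
    "47d0d13c5d85f2b0ff8318d2877eec2f"
    "63b931bd47417a81a538327af927da3e"
)
assert verify_sha512(b"", EMPTY_SHA512)
assert not verify_sha512(b"tampered", EMPTY_SHA512)
```

On mismatch Ignition would refuse to write the file rather than proceed, which is why each `DEBUG : ... file matches expected sum` line is immediately followed by the `[finished] writing file` line.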
Feb 12 20:25:28.614494 initrd-setup-root-after-ignition[803]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:25:28.615546 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:25:28.627513 kernel: audit: type=1130 audit(1707769528.617:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.618175 systemd[1]: Reached target ignition-complete.target. Feb 12 20:25:28.631093 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:25:28.658327 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:25:28.658586 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:25:28.677498 kernel: audit: type=1130 audit(1707769528.660:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.677544 kernel: audit: type=1131 audit(1707769528.660:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:28.661101 systemd[1]: Reached target initrd-fs.target. Feb 12 20:25:28.678433 systemd[1]: Reached target initrd.target. Feb 12 20:25:28.680404 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:25:28.681916 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:25:28.705669 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:25:28.724705 kernel: audit: type=1130 audit(1707769528.707:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.724082 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:25:28.741436 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:25:28.743885 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:25:28.746372 systemd[1]: Stopped target timers.target. Feb 12 20:25:28.747962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:25:28.748106 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:25:28.758114 kernel: audit: type=1131 audit(1707769528.749:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.750219 systemd[1]: Stopped target initrd.target. Feb 12 20:25:28.758785 systemd[1]: Stopped target basic.target. Feb 12 20:25:28.760135 systemd[1]: Stopped target ignition-complete.target. 
Feb 12 20:25:28.761715 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:25:28.763210 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:25:28.765134 systemd[1]: Stopped target remote-fs.target. Feb 12 20:25:28.765932 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:25:28.767603 systemd[1]: Stopped target sysinit.target. Feb 12 20:25:28.769138 systemd[1]: Stopped target local-fs.target. Feb 12 20:25:28.770532 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:25:28.771622 systemd[1]: Stopped target swap.target. Feb 12 20:25:28.777444 kernel: audit: type=1131 audit(1707769528.773:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.772971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:25:28.773226 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:25:28.782262 kernel: audit: type=1131 audit(1707769528.778:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.774025 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:25:28.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:28.777958 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:25:28.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.778118 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:25:28.778959 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:25:28.779124 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:25:28.782931 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:25:28.783079 systemd[1]: Stopped ignition-files.service. Feb 12 20:25:28.784791 systemd[1]: Stopping ignition-mount.service... Feb 12 20:25:28.796907 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:25:28.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.797649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:25:28.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.797940 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:25:28.798921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:25:28.799090 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:25:28.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:28.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.804397 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:25:28.809378 ignition[816]: INFO : Ignition 2.14.0 Feb 12 20:25:28.809378 ignition[816]: INFO : Stage: umount Feb 12 20:25:28.809378 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:28.809378 ignition[816]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:25:28.809378 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:25:28.809378 ignition[816]: INFO : umount: umount passed Feb 12 20:25:28.809378 ignition[816]: INFO : Ignition finished successfully Feb 12 20:25:28.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:28.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.804507 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:25:28.808921 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:25:28.809044 systemd[1]: Stopped ignition-mount.service. Feb 12 20:25:28.810074 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:25:28.810135 systemd[1]: Stopped ignition-disks.service. Feb 12 20:25:28.811024 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:25:28.811104 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:25:28.812010 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 20:25:28.812062 systemd[1]: Stopped ignition-fetch.service. Feb 12 20:25:28.813206 systemd[1]: Stopped target network.target. Feb 12 20:25:28.814673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:25:28.814719 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:25:28.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.815894 systemd[1]: Stopped target paths.target. Feb 12 20:25:28.816793 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:25:28.820489 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:25:28.821042 systemd[1]: Stopped target slices.target. Feb 12 20:25:28.821890 systemd[1]: Stopped target sockets.target. Feb 12 20:25:28.823367 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:25:28.823407 systemd[1]: Closed iscsid.socket. 
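Earlier in this boot (the `op(b)`/`op(d)` entries), Ignition wrote systemd drop-ins named `20-clct-provider-override.conf` under `coreos-metadata.service.d/` and `coreos-metadata-sshkeys@.service.d/`. The log records only that the drop-ins were written, not their contents; the fragment below is a hypothetical illustration of the drop-in mechanism only, and the command line shown is an assumption, not the actual file:

```ini
# /etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf
# Hypothetical contents -- shown only to illustrate how a drop-in overrides
# a unit. An empty ExecStart= clears the value inherited from the parent
# unit before the replacement line takes effect.
[Service]
ExecStart=
ExecStart=/usr/bin/coreos-metadata --provider=openstack-metadata
```

Drop-ins merge with (rather than replace) the parent unit file, which is why an override of a list-valued setting such as `ExecStart=` must clear it first.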
Feb 12 20:25:28.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.824221 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:25:28.824246 systemd[1]: Closed iscsiuio.socket. Feb 12 20:25:28.825072 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:25:28.825131 systemd[1]: Stopped ignition-setup.service. Feb 12 20:25:28.826257 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:25:28.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.827307 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:25:28.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.830541 systemd-networkd[632]: eth0: DHCPv6 lease lost Feb 12 20:25:28.840000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:25:28.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.831401 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:25:28.831493 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:25:28.833615 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:25:28.833664 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:25:28.835287 systemd[1]: Stopping network-cleanup.service... Feb 12 20:25:28.838575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Feb 12 20:25:28.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.838635 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:25:28.839535 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:25:28.839577 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:25:28.840613 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:25:28.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.840654 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:25:28.841773 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:25:28.843726 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:25:28.851000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:25:28.844289 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:25:28.844430 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:25:28.848104 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:25:28.848267 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:25:28.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.851303 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:25:28.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.851481 systemd[1]: Closed systemd-udevd-control.socket. 
Feb 12 20:25:28.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.854567 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:25:28.854603 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:25:28.855746 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:25:28.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.855808 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:25:28.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.856787 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:25:28.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.856830 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:25:28.857635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:25:28.857671 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:25:28.859329 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:25:28.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.860059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 12 20:25:28.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.860116 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:25:28.870214 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:25:28.870277 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:25:28.871023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:25:28.871061 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:25:28.873199 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:25:28.873767 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:25:28.873877 systemd[1]: Stopped network-cleanup.service. Feb 12 20:25:28.874763 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:25:28.874850 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:25:28.916627 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:25:28.964089 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:25:28.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.964218 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:25:28.964881 systemd[1]: Reached target initrd-switch-root.target. 
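The interleaved `audit(EPOCH.MS:SERIAL):` records above follow the kernel audit framing: a Unix timestamp and a per-boot serial number, then `key=value` pairs. A small sketch that pulls the timestamp, serial, and `unit=` field out of one such line (the field selection is illustrative; real tooling would use `ausearch`/`auditd` parsers):

```python
import re

# Matches the "audit(1707769528.418:38)" prefix seen throughout this log.
AUDIT_RE = re.compile(r"audit\((?P<ts>\d+\.\d+):(?P<serial>\d+)\)")

def parse_audit(line: str) -> dict:
    """Extract timestamp, serial, and unit= from one audit record (sketch)."""
    m = AUDIT_RE.search(line)
    if not m:
        return {}
    unit = re.search(r"unit=([\w@\\.-]+)", line)
    return {
        "ts": float(m.group("ts")),
        "serial": int(m.group("serial")),
        "unit": unit.group(1) if unit else None,
    }

line = ("audit: type=1130 audit(1707769528.418:38): pid=1 uid=0 "
        "msg='unit=ignition-files comm=\"systemd\" res=success'")
rec = parse_audit(line)
assert rec == {"ts": 1707769528.418, "serial": 38, "unit": "ignition-files"}
```

Note that the serial numbers (`:38`, `:39`, ...) increase monotonically even when the kernel `audit:` echo and the `audit[1]:` userspace line for the same event appear out of order in the console output, which is why sorting by serial reconstructs the true event order.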
Feb 12 20:25:28.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.966643 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:25:28.966701 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:25:28.969185 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:25:28.987319 systemd[1]: Switching root. Feb 12 20:25:29.012924 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 12 20:25:29.013052 iscsid[637]: iscsid shutting down. Feb 12 20:25:29.014371 systemd-journald[185]: Journal stopped Feb 12 20:25:34.530173 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:25:34.530227 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 20:25:34.530241 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:25:34.530252 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:25:34.530263 kernel: SELinux: policy capability open_perms=1 Feb 12 20:25:34.530274 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:25:34.530285 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:25:34.530296 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:25:34.530309 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:25:34.530320 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:25:34.530332 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:25:34.530367 systemd[1]: Successfully loaded SELinux policy in 146.926ms. Feb 12 20:25:34.530386 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.166ms. 
Feb 12 20:25:34.530399 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:25:34.530411 systemd[1]: Detected virtualization kvm. Feb 12 20:25:34.530422 systemd[1]: Detected architecture x86-64. Feb 12 20:25:34.530433 systemd[1]: Detected first boot. Feb 12 20:25:34.530445 systemd[1]: Hostname set to . Feb 12 20:25:34.530458 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:25:34.530469 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:25:34.530482 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:25:34.530494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:25:34.530507 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:25:34.530520 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
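The `systemd 252 running in system mode (+PAM +AUDIT ... -APPARMOR ...)` banner above encodes compile-time features as `+NAME` (built in) and `-NAME` (built without). A trivial sketch that splits such a banner into enabled and disabled sets:

```python
def parse_features(flags: str):
    """Split a systemd version banner's feature tokens into (enabled, disabled) sets."""
    enabled, disabled = set(), set()
    for tok in flags.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

# Abbreviated copy of the banner from this log:
banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -TPM2 -BPF_FRAMEWORK"
on, off = parse_features(banner)
assert "SELINUX" in on and "APPARMOR" in off
```

This is consistent with the rest of the boot: SELinux policy loads successfully (the `Successfully loaded SELinux policy` line just above), while no AppArmor activity appears anywhere in the log.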
Feb 12 20:25:34.530535 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 12 20:25:34.530548 kernel: audit: type=1334 audit(1707769534.269:88): prog-id=12 op=LOAD Feb 12 20:25:34.530561 kernel: audit: type=1334 audit(1707769534.270:89): prog-id=3 op=UNLOAD Feb 12 20:25:34.530571 kernel: audit: type=1334 audit(1707769534.271:90): prog-id=13 op=LOAD Feb 12 20:25:34.530582 kernel: audit: type=1334 audit(1707769534.272:91): prog-id=14 op=LOAD Feb 12 20:25:34.530592 kernel: audit: type=1334 audit(1707769534.272:92): prog-id=4 op=UNLOAD Feb 12 20:25:34.530603 kernel: audit: type=1334 audit(1707769534.272:93): prog-id=5 op=UNLOAD Feb 12 20:25:34.530613 kernel: audit: type=1334 audit(1707769534.279:94): prog-id=15 op=LOAD Feb 12 20:25:34.530624 kernel: audit: type=1334 audit(1707769534.279:95): prog-id=12 op=UNLOAD Feb 12 20:25:34.530635 kernel: audit: type=1334 audit(1707769534.280:96): prog-id=16 op=LOAD Feb 12 20:25:34.530646 kernel: audit: type=1334 audit(1707769534.282:97): prog-id=17 op=LOAD Feb 12 20:25:34.530656 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:25:34.530667 systemd[1]: Stopped iscsiuio.service. Feb 12 20:25:34.530678 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:25:34.530690 systemd[1]: Stopped iscsid.service. Feb 12 20:25:34.530701 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:25:34.530712 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:25:34.530726 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:25:34.530738 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:25:34.530750 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:25:34.530762 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 20:25:34.530773 systemd[1]: Created slice system-getty.slice. Feb 12 20:25:34.530785 systemd[1]: Created slice system-modprobe.slice. 
Feb 12 20:25:34.530796 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 20:25:34.530810 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 20:25:34.530821 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 20:25:34.530832 systemd[1]: Created slice user.slice.
Feb 12 20:25:34.530844 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:25:34.530855 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 20:25:34.530867 systemd[1]: Set up automount boot.automount.
Feb 12 20:25:34.530878 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 20:25:34.530890 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 20:25:34.530901 systemd[1]: Stopped target initrd-fs.target.
Feb 12 20:25:34.530914 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 20:25:34.530925 systemd[1]: Reached target integritysetup.target.
Feb 12 20:25:34.530937 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:25:34.530948 systemd[1]: Reached target remote-fs.target.
Feb 12 20:25:34.530960 systemd[1]: Reached target slices.target.
Feb 12 20:25:34.530972 systemd[1]: Reached target swap.target.
Feb 12 20:25:34.530983 systemd[1]: Reached target torcx.target.
Feb 12 20:25:34.530994 systemd[1]: Reached target veritysetup.target.
Feb 12 20:25:34.531005 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 20:25:34.531017 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 20:25:34.531030 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:25:34.531041 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:25:34.531053 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:25:34.531064 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 20:25:34.531076 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 20:25:34.531087 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 20:25:34.531099 systemd[1]: Mounting media.mount...
Feb 12 20:25:34.531110 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:25:34.531122 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 20:25:34.531134 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 20:25:34.531146 systemd[1]: Mounting tmp.mount...
Feb 12 20:25:34.531157 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 20:25:34.531168 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 20:25:34.531180 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:25:34.531191 systemd[1]: Starting modprobe@configfs.service...
Feb 12 20:25:34.531202 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 20:25:34.531215 systemd[1]: Starting modprobe@drm.service...
Feb 12 20:25:34.531226 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 20:25:34.531239 systemd[1]: Starting modprobe@fuse.service...
Feb 12 20:25:34.531251 systemd[1]: Starting modprobe@loop.service...
Feb 12 20:25:34.531264 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 20:25:34.531277 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 20:25:34.531290 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 20:25:34.531302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 20:25:34.531314 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 20:25:34.531326 systemd[1]: Stopped systemd-journald.service.
Feb 12 20:25:34.531352 systemd[1]: Starting systemd-journald.service...
Feb 12 20:25:34.531367 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:25:34.531380 systemd[1]: Starting systemd-network-generator.service...
Feb 12 20:25:34.531392 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 20:25:34.531404 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:25:34.531416 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 20:25:34.531429 systemd[1]: Stopped verity-setup.service.
Feb 12 20:25:34.531442 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:25:34.531454 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 20:25:34.531466 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 20:25:34.531480 systemd[1]: Mounted media.mount.
Feb 12 20:25:34.531497 systemd-journald[914]: Journal started
Feb 12 20:25:34.531540 systemd-journald[914]: Runtime Journal (/run/log/journal/7eddc33fbadb4e93b617549219825a25) is 4.9M, max 39.5M, 34.5M free.
Feb 12 20:25:29.335000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 20:25:29.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:25:34.534414 systemd[1]: Started systemd-journald.service.
Feb 12 20:25:34.534437 kernel: loop: module loaded
Feb 12 20:25:29.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:25:29.662000 audit: BPF prog-id=10 op=LOAD
Feb 12 20:25:29.662000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 20:25:29.662000 audit: BPF prog-id=11 op=LOAD
Feb 12 20:25:29.663000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 20:25:29.835000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 20:25:29.835000 audit[848]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cede0 a2=c0000d7ac0 a3=32 items=0 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:25:29.835000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:25:29.837000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 20:25:29.837000 audit[848]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:25:29.837000 audit: CWD cwd="/"
Feb 12 20:25:29.837000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:29.837000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:29.837000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:25:34.269000 audit: BPF prog-id=12 op=LOAD
Feb 12 20:25:34.270000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:25:34.271000 audit: BPF prog-id=13 op=LOAD
Feb 12 20:25:34.272000 audit: BPF prog-id=14 op=LOAD
Feb 12 20:25:34.272000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:25:34.272000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:25:34.279000 audit: BPF prog-id=15 op=LOAD
Feb 12 20:25:34.279000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 20:25:34.280000 audit: BPF prog-id=16 op=LOAD
Feb 12 20:25:34.282000 audit: BPF prog-id=17 op=LOAD
Feb 12 20:25:34.282000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 20:25:34.282000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 20:25:34.283000 audit: BPF prog-id=18 op=LOAD
Feb 12 20:25:34.283000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 20:25:34.285000 audit: BPF prog-id=19 op=LOAD
Feb 12 20:25:34.286000 audit: BPF prog-id=20 op=LOAD
Feb 12 20:25:34.286000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 20:25:34.286000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 20:25:34.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.294000 audit: BPF prog-id=18 op=UNLOAD
Feb 12 20:25:34.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.474000 audit: BPF prog-id=21 op=LOAD
Feb 12 20:25:34.474000 audit: BPF prog-id=22 op=LOAD
Feb 12 20:25:34.474000 audit: BPF prog-id=23 op=LOAD
Feb 12 20:25:34.474000 audit: BPF prog-id=19 op=UNLOAD
Feb 12 20:25:34.474000 audit: BPF prog-id=20 op=UNLOAD
Feb 12 20:25:34.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.528000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:25:34.528000 audit[914]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe57bc8290 a2=4000 a3=7ffe57bc832c items=0 ppid=1 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:25:34.528000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 20:25:34.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.268009 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:25:29.831241 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:25:34.268022 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 20:25:29.832310 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:25:34.288125 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 20:25:29.832357 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:25:34.534901 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 20:25:29.832412 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 20:25:34.535400 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 20:25:29.832426 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 20:25:34.535874 systemd[1]: Mounted tmp.mount.
Feb 12 20:25:29.832465 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 20:25:34.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.536497 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:25:29.832482 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 20:25:34.537108 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 20:25:29.832764 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 20:25:34.537226 systemd[1]: Finished modprobe@configfs.service.
Feb 12 20:25:29.832813 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:25:34.537863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 20:25:29.832831 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:25:34.537973 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 20:25:29.833831 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 20:25:34.539581 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 20:25:34.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:29.833886 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 20:25:34.539692 systemd[1]: Finished modprobe@drm.service.
Feb 12 20:25:29.833910 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 20:25:29.833928 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 20:25:34.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:29.833950 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 20:25:34.541354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 20:25:29.833967 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 20:25:34.541498 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 20:25:33.767230 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:33Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:25:34.543536 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 20:25:33.767631 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:33Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:25:34.543662 systemd[1]: Finished modprobe@loop.service.
Feb 12 20:25:33.767786 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:33Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:25:33.768065 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:33Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:25:34.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:33.768136 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:33Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 20:25:33.768226 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:25:33Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 20:25:34.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.544628 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:25:34.545296 systemd[1]: Finished systemd-network-generator.service.
Feb 12 20:25:34.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.546082 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 20:25:34.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.547770 systemd[1]: Reached target network-pre.target.
Feb 12 20:25:34.549384 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 20:25:34.549830 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 20:25:34.553061 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 20:25:34.554667 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 20:25:34.555233 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 20:25:34.556369 systemd[1]: Starting systemd-random-seed.service...
Feb 12 20:25:34.557069 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 20:25:34.560224 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:25:34.562190 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 20:25:34.571453 systemd-journald[914]: Time spent on flushing to /var/log/journal/7eddc33fbadb4e93b617549219825a25 is 44.101ms for 1133 entries.
Feb 12 20:25:34.571453 systemd-journald[914]: System Journal (/var/log/journal/7eddc33fbadb4e93b617549219825a25) is 8.0M, max 584.8M, 576.8M free.
Feb 12 20:25:34.632225 systemd-journald[914]: Received client request to flush runtime journal.
Feb 12 20:25:34.632274 kernel: fuse: init (API version 7.34)
Feb 12 20:25:34.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.583609 systemd[1]: Finished systemd-random-seed.service.
Feb 12 20:25:34.584235 systemd[1]: Reached target first-boot-complete.target.
Feb 12 20:25:34.592259 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 20:25:34.592423 systemd[1]: Finished modprobe@fuse.service.
Feb 12 20:25:34.594290 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 20:25:34.597890 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 20:25:34.607590 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:25:34.633066 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 20:25:34.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.637862 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 20:25:34.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.639495 systemd[1]: Starting systemd-sysusers.service...
Feb 12 20:25:34.644714 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:25:34.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.646317 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 20:25:34.667253 udevadm[958]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 20:25:34.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.845208 systemd[1]: Finished systemd-sysusers.service.
Feb 12 20:25:34.848408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:25:34.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:34.907773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:25:35.439591 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 20:25:35.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:35.441000 audit: BPF prog-id=24 op=LOAD
Feb 12 20:25:35.442000 audit: BPF prog-id=25 op=LOAD
Feb 12 20:25:35.442000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:25:35.442000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:25:35.444123 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:25:35.498581 systemd-udevd[961]: Using default interface naming scheme 'v252'.
Feb 12 20:25:35.565644 systemd[1]: Started systemd-udevd.service.
Feb 12 20:25:35.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:35.572000 audit: BPF prog-id=26 op=LOAD
Feb 12 20:25:35.575659 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:25:35.598000 audit: BPF prog-id=27 op=LOAD
Feb 12 20:25:35.599000 audit: BPF prog-id=28 op=LOAD
Feb 12 20:25:35.599000 audit: BPF prog-id=29 op=LOAD
Feb 12 20:25:35.602434 systemd[1]: Starting systemd-userdbd.service...
Feb 12 20:25:35.647240 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 20:25:35.653967 systemd[1]: Started systemd-userdbd.service.
Feb 12 20:25:35.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:35.737371 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 12 20:25:35.748809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:25:35.750368 kernel: ACPI: button: Power Button [PWRF]
Feb 12 20:25:35.767000 audit[982]: AVC avc: denied { confidentiality } for pid=982 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 20:25:35.767000 audit[982]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a1fae7cef0 a1=32194 a2=7ff00cdf4bc5 a3=5 items=108 ppid=961 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:25:35.767000 audit: CWD cwd="/"
Feb 12 20:25:35.767000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=1 name=(null) inode=13637 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=2 name=(null) inode=13637 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=3 name=(null) inode=13638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=4 name=(null) inode=13637 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=5 name=(null) inode=13639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=6 name=(null) inode=13637 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=7 name=(null) inode=13640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=8 name=(null) inode=13640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=9 name=(null) inode=13641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=10 name=(null) inode=13640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=11 name=(null) inode=13642 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=12 name=(null) inode=13640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=13 name=(null) inode=13643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=14 name=(null) inode=13640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=15 name=(null) inode=13644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=16 name=(null) inode=13640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=17 name=(null) inode=13645 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=18 name=(null) inode=13637 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=19 name=(null) inode=13646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=20 name=(null) inode=13646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=21 name=(null) inode=13647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=22 name=(null) inode=13646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=23 name=(null) inode=13648 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=24 name=(null) inode=13646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=25 name=(null) inode=13649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=26 name=(null) inode=13646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=27 name=(null) inode=13650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=28 name=(null) inode=13646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=29 name=(null) inode=13651 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=30 name=(null) inode=13637 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:25:35.767000 audit: PATH item=31 name=(null)
inode=13652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=32 name=(null) inode=13652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=33 name=(null) inode=13653 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=34 name=(null) inode=13652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=35 name=(null) inode=13654 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=36 name=(null) inode=13652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=37 name=(null) inode=13655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=38 name=(null) inode=13652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=39 name=(null) inode=13656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=40 name=(null) inode=13652 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=41 name=(null) inode=13657 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=42 name=(null) inode=13637 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=43 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=44 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=45 name=(null) inode=13659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=46 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=47 name=(null) inode=13660 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=48 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=49 name=(null) inode=13661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=50 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=51 name=(null) inode=13662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=52 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=53 name=(null) inode=13663 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=55 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=56 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=57 name=(null) inode=13665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=58 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=59 name=(null) inode=13666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=60 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=61 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=62 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=63 name=(null) inode=13668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=64 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=65 name=(null) inode=13669 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=66 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=67 name=(null) inode=13670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=68 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=69 name=(null) inode=13671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=70 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=71 name=(null) inode=13672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=72 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=73 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=74 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=75 name=(null) inode=13674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=76 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:25:35.767000 audit: PATH item=77 name=(null) inode=13675 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=78 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=79 name=(null) inode=13676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=80 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=81 name=(null) inode=13677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=82 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=83 name=(null) inode=13678 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=84 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=85 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=86 
name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=87 name=(null) inode=13680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=88 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=89 name=(null) inode=13681 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=90 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=91 name=(null) inode=13682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=92 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=93 name=(null) inode=13683 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=94 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=95 name=(null) inode=13684 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=96 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=97 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=98 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=99 name=(null) inode=13686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=100 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=101 name=(null) inode=13687 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=102 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=103 name=(null) inode=13688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=104 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=105 name=(null) inode=13689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=106 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PATH item=107 name=(null) inode=13690 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:35.767000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:25:35.802379 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:25:35.815375 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 12 20:25:35.822378 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:25:35.853764 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:25:35.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:35.855497 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:25:35.990555 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:25:35.999160 systemd-networkd[971]: lo: Link UP Feb 12 20:25:36.000212 systemd-networkd[971]: lo: Gained carrier Feb 12 20:25:36.001838 systemd-networkd[971]: Enumeration completed Feb 12 20:25:36.002261 systemd[1]: Started systemd-networkd.service. Feb 12 20:25:36.002597 systemd-networkd[971]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 20:25:36.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.007220 systemd-networkd[971]: eth0: Link UP Feb 12 20:25:36.007243 systemd-networkd[971]: eth0: Gained carrier Feb 12 20:25:36.026639 systemd-networkd[971]: eth0: DHCPv4 address 172.24.4.189/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:25:36.034309 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:25:36.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.035784 systemd[1]: Reached target cryptsetup.target. Feb 12 20:25:36.039312 systemd[1]: Starting lvm2-activation.service... Feb 12 20:25:36.048846 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:25:36.090259 systemd[1]: Finished lvm2-activation.service. Feb 12 20:25:36.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.091678 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:25:36.092904 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:25:36.092969 systemd[1]: Reached target local-fs.target. Feb 12 20:25:36.094116 systemd[1]: Reached target machines.target. Feb 12 20:25:36.097964 systemd[1]: Starting ldconfig.service... Feb 12 20:25:36.100819 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Feb 12 20:25:36.100931 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:36.103419 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:25:36.110978 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:25:36.119222 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:25:36.121097 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:25:36.121191 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:25:36.126379 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:25:36.144466 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl) Feb 12 20:25:36.147000 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:25:36.165802 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:25:36.184527 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:25:36.187627 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:25:36.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.194822 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:25:36.488409 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:25:36.489701 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 12 20:25:36.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.959545 systemd-fsck[1002]: fsck.fat 4.2 (2021-01-31) Feb 12 20:25:36.959545 systemd-fsck[1002]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:25:36.963668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:25:36.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.967573 systemd[1]: Mounting boot.mount... Feb 12 20:25:36.999983 systemd[1]: Mounted boot.mount. Feb 12 20:25:37.063401 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:25:37.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.161510 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:25:37.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.163385 systemd[1]: Starting audit-rules.service... Feb 12 20:25:37.164919 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:25:37.171000 audit: BPF prog-id=30 op=LOAD Feb 12 20:25:37.169736 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:25:37.173860 systemd[1]: Starting systemd-resolved.service... 
Feb 12 20:25:37.177000 audit: BPF prog-id=31 op=LOAD Feb 12 20:25:37.181120 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:25:37.183157 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:25:37.192065 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:25:37.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.192771 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:25:37.198000 audit[1010]: SYSTEM_BOOT pid=1010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.200856 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:25:37.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.274951 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:25:37.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:37.288000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:25:37.288000 audit[1025]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5f6758a0 a2=420 a3=0 items=0 ppid=1005 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:37.288000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:25:37.289674 augenrules[1025]: No rules Feb 12 20:25:37.290264 systemd[1]: Finished audit-rules.service. Feb 12 20:25:37.307061 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:25:37.307651 systemd[1]: Reached target time-set.target. Feb 12 20:25:37.307887 systemd-resolved[1008]: Positive Trust Anchors: Feb 12 20:25:37.308454 systemd-resolved[1008]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:25:37.308561 systemd-resolved[1008]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:25:37.324061 systemd-resolved[1008]: Using system hostname 'ci-3510-3-2-4-778020c044.novalocal'. Feb 12 20:25:37.326005 systemd[1]: Started systemd-resolved.service. Feb 12 20:25:37.326613 systemd[1]: Reached target network.target. Feb 12 20:25:37.327010 systemd[1]: Reached target nss-lookup.target. Feb 12 20:25:38.157249 systemd-resolved[1008]: Clock change detected. Flushing caches. 
Feb 12 20:25:38.157450 systemd-timesyncd[1009]: Contacted time server 5.196.8.113:123 (0.flatcar.pool.ntp.org). Feb 12 20:25:38.157740 systemd-timesyncd[1009]: Initial clock synchronization to Mon 2024-02-12 20:25:38.157064 UTC. Feb 12 20:25:38.447140 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:25:38.482816 systemd[1]: Finished ldconfig.service. Feb 12 20:25:38.486238 systemd[1]: Starting systemd-update-done.service... Feb 12 20:25:38.501133 systemd[1]: Finished systemd-update-done.service. Feb 12 20:25:38.502421 systemd[1]: Reached target sysinit.target. Feb 12 20:25:38.503694 systemd[1]: Started motdgen.path. Feb 12 20:25:38.504786 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:25:38.506401 systemd[1]: Started logrotate.timer. Feb 12 20:25:38.507642 systemd[1]: Started mdadm.timer. Feb 12 20:25:38.508633 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:25:38.509707 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:25:38.509779 systemd[1]: Reached target paths.target. Feb 12 20:25:38.510767 systemd[1]: Reached target timers.target. Feb 12 20:25:38.512620 systemd[1]: Listening on dbus.socket. Feb 12 20:25:38.515680 systemd[1]: Starting docker.socket... Feb 12 20:25:38.523929 systemd[1]: Listening on sshd.socket. Feb 12 20:25:38.525365 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:38.526456 systemd[1]: Listening on docker.socket. Feb 12 20:25:38.527719 systemd[1]: Reached target sockets.target. Feb 12 20:25:38.528897 systemd[1]: Reached target basic.target. Feb 12 20:25:38.530204 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Feb 12 20:25:38.530440 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 20:25:38.532511 systemd[1]: Starting containerd.service...
Feb 12 20:25:38.536012 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 20:25:38.539591 systemd[1]: Starting dbus.service...
Feb 12 20:25:38.544801 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 20:25:38.555730 systemd[1]: Starting extend-filesystems.service...
Feb 12 20:25:38.559467 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 20:25:38.568892 jq[1039]: false
Feb 12 20:25:38.562943 systemd[1]: Starting motdgen.service...
Feb 12 20:25:38.567591 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 20:25:38.573279 systemd[1]: Starting prepare-critools.service...
Feb 12 20:25:38.576579 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 20:25:38.579979 systemd[1]: Starting sshd-keygen.service...
Feb 12 20:25:38.586549 systemd[1]: Starting systemd-logind.service...
Feb 12 20:25:38.587167 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:25:38.633718 jq[1051]: true
Feb 12 20:25:38.587237 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 20:25:38.638340 tar[1053]: ./
Feb 12 20:25:38.638340 tar[1053]: ./loopback
Feb 12 20:25:38.587748 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 12 20:25:38.588504 systemd[1]: Starting update-engine.service...
Feb 12 20:25:38.590028 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 20:25:38.642463 extend-filesystems[1040]: Found vda
Feb 12 20:25:38.642463 extend-filesystems[1040]: Found vda1
Feb 12 20:25:38.642463 extend-filesystems[1040]: Found vda2
Feb 12 20:25:38.642463 extend-filesystems[1040]: Found vda3
Feb 12 20:25:38.642463 extend-filesystems[1040]: Found usr
Feb 12 20:25:38.664078 tar[1054]: crictl
Feb 12 20:25:38.595203 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 20:25:38.667456 extend-filesystems[1040]: Found vda4
Feb 12 20:25:38.667456 extend-filesystems[1040]: Found vda6
Feb 12 20:25:38.667456 extend-filesystems[1040]: Found vda7
Feb 12 20:25:38.667456 extend-filesystems[1040]: Found vda9
Feb 12 20:25:38.667456 extend-filesystems[1040]: Checking size of /dev/vda9
Feb 12 20:25:38.595380 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 20:25:38.687257 jq[1056]: true
Feb 12 20:25:38.623496 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 20:25:38.623661 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 20:25:38.678257 systemd-networkd[971]: eth0: Gained IPv6LL
Feb 12 20:25:38.697460 dbus-daemon[1036]: [system] SELinux support is enabled
Feb 12 20:25:38.697584 systemd[1]: Started dbus.service.
Feb 12 20:25:38.706718 extend-filesystems[1040]: Resized partition /dev/vda9
Feb 12 20:25:38.700033 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 20:25:38.700054 systemd[1]: Reached target system-config.target.
Feb 12 20:25:38.700557 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 20:25:38.700575 systemd[1]: Reached target user-config.target.
Feb 12 20:25:38.708620 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 20:25:38.708775 systemd[1]: Finished motdgen.service.
Feb 12 20:25:38.716714 extend-filesystems[1082]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 20:25:38.742951 systemd[1]: Created slice system-sshd.slice.
Feb 12 20:25:38.758414 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Feb 12 20:25:38.773944 update_engine[1049]: I0212 20:25:38.772749 1049 main.cc:92] Flatcar Update Engine starting
Feb 12 20:25:38.781657 systemd[1]: Started update-engine.service.
Feb 12 20:25:38.842939 update_engine[1049]: I0212 20:25:38.781738 1049 update_check_scheduler.cc:74] Next update check in 5m31s
Feb 12 20:25:38.843141 coreos-metadata[1035]: Feb 12 20:25:38.825 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Feb 12 20:25:38.785323 systemd[1]: Started locksmithd.service.
Feb 12 20:25:38.836670 systemd-logind[1048]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 12 20:25:38.836693 systemd-logind[1048]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 20:25:38.838755 systemd-logind[1048]: New seat seat0.
Feb 12 20:25:38.846366 env[1055]: time="2024-02-12T20:25:38.844830045Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 20:25:38.845271 systemd[1]: Started systemd-logind.service.
Feb 12 20:25:38.857059 tar[1053]: ./bandwidth
Feb 12 20:25:38.892125 bash[1092]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:25:38.893210 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 20:25:38.898433 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Feb 12 20:25:39.068230 env[1055]: time="2024-02-12T20:25:38.901041059Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 20:25:39.068328 coreos-metadata[1035]: Feb 12 20:25:38.984 INFO Fetch successful
Feb 12 20:25:39.068328 coreos-metadata[1035]: Feb 12 20:25:38.984 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 12 20:25:39.068328 coreos-metadata[1035]: Feb 12 20:25:38.998 INFO Fetch successful
Feb 12 20:25:39.071916 env[1055]: time="2024-02-12T20:25:39.071602808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:25:39.073005 extend-filesystems[1082]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 20:25:39.073005 extend-filesystems[1082]: old_desc_blocks = 1, new_desc_blocks = 3
Feb 12 20:25:39.073005 extend-filesystems[1082]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Feb 12 20:25:39.093304 extend-filesystems[1040]: Resized filesystem in /dev/vda9
Feb 12 20:25:39.096092 env[1055]: time="2024-02-12T20:25:39.073482303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:25:39.096092 env[1055]: time="2024-02-12T20:25:39.073577401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:25:39.096092 env[1055]: time="2024-02-12T20:25:39.080809795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:25:39.096092 env[1055]: time="2024-02-12T20:25:39.080906367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 20:25:39.096092 env[1055]: time="2024-02-12T20:25:39.080993290Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 20:25:39.096092 env[1055]: time="2024-02-12T20:25:39.081056087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 20:25:39.096092 env[1055]: time="2024-02-12T20:25:39.088533361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:25:39.073851 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 20:25:39.103546 env[1055]: time="2024-02-12T20:25:39.096207644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:25:39.103546 env[1055]: time="2024-02-12T20:25:39.096750272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:25:39.103546 env[1055]: time="2024-02-12T20:25:39.096835892Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 20:25:39.103546 env[1055]: time="2024-02-12T20:25:39.097822613Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 20:25:39.103546 env[1055]: time="2024-02-12T20:25:39.097905559Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 20:25:39.074037 systemd[1]: Finished extend-filesystems.service.
Feb 12 20:25:39.081379 unknown[1035]: wrote ssh authorized keys file for user: core
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136146099Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136211471Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136228674Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136318221Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136339361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136371782Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136394645Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136412007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136427567Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136460629Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136477230Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136491867Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136642119Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 20:25:39.137652 env[1055]: time="2024-02-12T20:25:39.136750873Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137148990Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137180789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137218259Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137328386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137347993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137486833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137504456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137518933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137532579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137564298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137579196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.138077 env[1055]: time="2024-02-12T20:25:39.137596629Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142703277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142738252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142758520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142774761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142795069Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142810067Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142833892Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 20:25:39.143903 env[1055]: time="2024-02-12T20:25:39.142880790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 20:25:39.144198 env[1055]: time="2024-02-12T20:25:39.143156286Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 20:25:39.144198 env[1055]: time="2024-02-12T20:25:39.143236386Z" level=info msg="Connect containerd service"
Feb 12 20:25:39.144198 env[1055]: time="2024-02-12T20:25:39.143275971Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.144402003Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.144737482Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.144786083Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.144840926Z" level=info msg="containerd successfully booted in 0.355296s"
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.147154425Z" level=info msg="Start subscribing containerd event"
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.147218405Z" level=info msg="Start recovering state"
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.147281383Z" level=info msg="Start event monitor"
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.147299286Z" level=info msg="Start snapshots syncer"
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.147309576Z" level=info msg="Start cni network conf syncer for default"
Feb 12 20:25:39.154866 env[1055]: time="2024-02-12T20:25:39.147317971Z" level=info msg="Start streaming server"
Feb 12 20:25:39.155168 update-ssh-keys[1104]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:25:39.144970 systemd[1]: Started containerd.service.
Feb 12 20:25:39.153515 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 12 20:25:39.173986 tar[1053]: ./ptp
Feb 12 20:25:39.216941 tar[1053]: ./vlan
Feb 12 20:25:39.259346 tar[1053]: ./host-device
Feb 12 20:25:39.301205 tar[1053]: ./tuning
Feb 12 20:25:39.340304 tar[1053]: ./vrf
Feb 12 20:25:39.407001 tar[1053]: ./sbr
Feb 12 20:25:39.500570 tar[1053]: ./tap
Feb 12 20:25:39.550713 tar[1053]: ./dhcp
Feb 12 20:25:39.615659 sshd_keygen[1072]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 20:25:39.688671 systemd[1]: Finished sshd-keygen.service.
Feb 12 20:25:39.691246 systemd[1]: Starting issuegen.service...
Feb 12 20:25:39.693271 systemd[1]: Started sshd@0-172.24.4.189:22-172.24.4.1:50768.service.
Feb 12 20:25:39.698188 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 20:25:39.698414 systemd[1]: Finished issuegen.service.
Feb 12 20:25:39.701090 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 20:25:39.718191 tar[1053]: ./static
Feb 12 20:25:39.718896 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 20:25:39.721011 systemd[1]: Started getty@tty1.service.
Feb 12 20:25:39.723784 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 20:25:39.724467 systemd[1]: Reached target getty.target.
Feb 12 20:25:39.769792 tar[1053]: ./firewall
Feb 12 20:25:39.822376 tar[1053]: ./macvlan
Feb 12 20:25:39.822560 systemd[1]: Finished prepare-critools.service.
Feb 12 20:25:39.829347 locksmithd[1095]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 20:25:39.867277 tar[1053]: ./dummy
Feb 12 20:25:39.908207 tar[1053]: ./bridge
Feb 12 20:25:39.955244 tar[1053]: ./ipvlan
Feb 12 20:25:39.997671 tar[1053]: ./portmap
Feb 12 20:25:40.036839 tar[1053]: ./host-local
Feb 12 20:25:40.097775 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 20:25:40.099629 systemd[1]: Reached target multi-user.target.
Feb 12 20:25:40.103568 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 20:25:40.114380 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 20:25:40.114920 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 20:25:40.116478 systemd[1]: Startup finished in 1.039s (kernel) + 12.326s (initrd) + 10.158s (userspace) = 23.524s.
Feb 12 20:25:40.824827 sshd[1113]: Accepted publickey for core from 172.24.4.1 port 50768 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:25:40.830650 sshd[1113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:25:40.857811 systemd[1]: Created slice user-500.slice.
Feb 12 20:25:40.861778 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 20:25:40.867024 systemd-logind[1048]: New session 1 of user core.
Feb 12 20:25:40.885042 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 20:25:40.890092 systemd[1]: Starting user@500.service...
Feb 12 20:25:40.896891 (systemd)[1125]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:25:41.031878 systemd[1125]: Queued start job for default target default.target.
Feb 12 20:25:41.032473 systemd[1125]: Reached target paths.target.
Feb 12 20:25:41.032493 systemd[1125]: Reached target sockets.target.
Feb 12 20:25:41.032510 systemd[1125]: Reached target timers.target.
Feb 12 20:25:41.032524 systemd[1125]: Reached target basic.target.
Feb 12 20:25:41.032571 systemd[1125]: Reached target default.target.
Feb 12 20:25:41.032597 systemd[1125]: Startup finished in 123ms.
Feb 12 20:25:41.033561 systemd[1]: Started user@500.service.
Feb 12 20:25:41.036028 systemd[1]: Started session-1.scope.
Feb 12 20:25:41.367225 systemd[1]: Started sshd@1-172.24.4.189:22-172.24.4.1:50778.service.
Feb 12 20:25:42.582636 sshd[1134]: Accepted publickey for core from 172.24.4.1 port 50778 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:25:42.585997 sshd[1134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:25:42.597614 systemd-logind[1048]: New session 2 of user core.
Feb 12 20:25:42.598934 systemd[1]: Started session-2.scope.
Feb 12 20:25:43.160714 sshd[1134]: pam_unix(sshd:session): session closed for user core
Feb 12 20:25:43.167662 systemd[1]: Started sshd@2-172.24.4.189:22-172.24.4.1:50784.service.
Feb 12 20:25:43.171089 systemd[1]: sshd@1-172.24.4.189:22-172.24.4.1:50778.service: Deactivated successfully.
Feb 12 20:25:43.172894 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 20:25:43.175922 systemd-logind[1048]: Session 2 logged out. Waiting for processes to exit.
Feb 12 20:25:43.178383 systemd-logind[1048]: Removed session 2.
Feb 12 20:25:44.745387 sshd[1139]: Accepted publickey for core from 172.24.4.1 port 50784 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:25:44.748942 sshd[1139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:25:44.759857 systemd-logind[1048]: New session 3 of user core.
Feb 12 20:25:44.760831 systemd[1]: Started session-3.scope.
Feb 12 20:25:45.408472 sshd[1139]: pam_unix(sshd:session): session closed for user core
Feb 12 20:25:45.416266 systemd[1]: Started sshd@3-172.24.4.189:22-172.24.4.1:44296.service.
Feb 12 20:25:45.418798 systemd[1]: sshd@2-172.24.4.189:22-172.24.4.1:50784.service: Deactivated successfully.
Feb 12 20:25:45.420730 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 20:25:45.423703 systemd-logind[1048]: Session 3 logged out. Waiting for processes to exit.
Feb 12 20:25:45.427070 systemd-logind[1048]: Removed session 3.
Feb 12 20:25:46.762524 sshd[1145]: Accepted publickey for core from 172.24.4.1 port 44296 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:25:46.766568 sshd[1145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:25:46.776705 systemd[1]: Started session-4.scope.
Feb 12 20:25:46.779334 systemd-logind[1048]: New session 4 of user core.
Feb 12 20:25:47.359710 sshd[1145]: pam_unix(sshd:session): session closed for user core
Feb 12 20:25:47.365267 systemd[1]: Started sshd@4-172.24.4.189:22-172.24.4.1:44308.service.
Feb 12 20:25:47.370378 systemd[1]: sshd@3-172.24.4.189:22-172.24.4.1:44296.service: Deactivated successfully.
Feb 12 20:25:47.371920 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 20:25:47.374895 systemd-logind[1048]: Session 4 logged out. Waiting for processes to exit.
Feb 12 20:25:47.377419 systemd-logind[1048]: Removed session 4.
Feb 12 20:25:48.997683 sshd[1151]: Accepted publickey for core from 172.24.4.1 port 44308 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:25:49.000444 sshd[1151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:25:49.012236 systemd[1]: Started session-5.scope.
Feb 12 20:25:49.014222 systemd-logind[1048]: New session 5 of user core.
Feb 12 20:25:49.554089 sudo[1155]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 20:25:49.555367 sudo[1155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:25:50.244454 systemd[1]: Reloading.
Feb 12 20:25:50.409519 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2024-02-12T20:25:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:25:50.419541 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2024-02-12T20:25:50Z" level=info msg="torcx already run"
Feb 12 20:25:50.494374 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:25:50.494399 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:25:50.523882 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:25:50.634665 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 20:25:50.643233 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 20:25:50.644093 systemd[1]: Reached target network-online.target.
Feb 12 20:25:50.645911 systemd[1]: Started kubelet.service.
Feb 12 20:25:50.662269 systemd[1]: Starting coreos-metadata.service...
Feb 12 20:25:50.710998 coreos-metadata[1239]: Feb 12 20:25:50.710 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 12 20:25:50.721840 kubelet[1231]: E0212 20:25:50.721775 1231 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 12 20:25:50.724060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 20:25:50.724221 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 20:25:50.727463 coreos-metadata[1239]: Feb 12 20:25:50.727 INFO Fetch successful
Feb 12 20:25:50.727463 coreos-metadata[1239]: Feb 12 20:25:50.727 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Feb 12 20:25:50.739883 coreos-metadata[1239]: Feb 12 20:25:50.739 INFO Fetch successful
Feb 12 20:25:50.740075 coreos-metadata[1239]: Feb 12 20:25:50.740 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Feb 12 20:25:50.750120 coreos-metadata[1239]: Feb 12 20:25:50.749 INFO Fetch successful
Feb 12 20:25:50.750329 coreos-metadata[1239]: Feb 12 20:25:50.750 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Feb 12 20:25:50.764074 coreos-metadata[1239]: Feb 12 20:25:50.763 INFO Fetch successful
Feb 12 20:25:50.764074 coreos-metadata[1239]: Feb 12 20:25:50.764 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Feb 12 20:25:50.775858 coreos-metadata[1239]: Feb 12 20:25:50.775 INFO Fetch successful
Feb 12 20:25:50.788967 systemd[1]: Finished coreos-metadata.service.
Feb 12 20:25:51.596232 systemd[1]: Stopped kubelet.service.
Feb 12 20:25:51.633817 systemd[1]: Reloading.
Feb 12 20:25:51.766686 /usr/lib/systemd/system-generators/torcx-generator[1294]: time="2024-02-12T20:25:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:25:51.767661 /usr/lib/systemd/system-generators/torcx-generator[1294]: time="2024-02-12T20:25:51Z" level=info msg="torcx already run"
Feb 12 20:25:51.867717 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:25:51.867741 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:25:51.895481 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:25:51.987045 systemd[1]: Started kubelet.service.
Feb 12 20:25:52.071229 kubelet[1341]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:25:52.071229 kubelet[1341]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:25:52.071229 kubelet[1341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:25:52.071818 kubelet[1341]: I0212 20:25:52.071227 1341 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 20:25:52.557228 kubelet[1341]: I0212 20:25:52.557046 1341 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 12 20:25:52.557656 kubelet[1341]: I0212 20:25:52.557622 1341 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 20:25:52.558526 kubelet[1341]: I0212 20:25:52.558483 1341 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 12 20:25:52.563619 kubelet[1341]: I0212 20:25:52.563563 1341 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 20:25:52.575974 kubelet[1341]: I0212 20:25:52.575928 1341 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 20:25:52.576282 kubelet[1341]: I0212 20:25:52.576140 1341 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 20:25:52.576403 kubelet[1341]: I0212 20:25:52.576302 1341 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 12 20:25:52.576403 kubelet[1341]: I0212 20:25:52.576327 1341 topology_manager.go:138] "Creating topology manager with none policy"
Feb 12 20:25:52.576403 kubelet[1341]: I0212 20:25:52.576338 1341 container_manager_linux.go:301] "Creating device plugin manager"
Feb 12 20:25:52.576906 kubelet[1341]: I0212 20:25:52.576444 1341 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:25:52.576906 kubelet[1341]: I0212 20:25:52.576519 1341 kubelet.go:393] "Attempting to sync node with API server"
Feb 12 20:25:52.576906 kubelet[1341]: I0212 20:25:52.576535 1341 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 20:25:52.576906 kubelet[1341]: I0212 20:25:52.576557 1341 kubelet.go:309] "Adding apiserver pod source"
Feb 12 20:25:52.576906 kubelet[1341]: I0212 20:25:52.576575 1341 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 20:25:52.577683 kubelet[1341]: E0212 20:25:52.577059 1341 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:25:52.577683 kubelet[1341]: E0212 20:25:52.577126 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:25:52.579781 kubelet[1341]: I0212 20:25:52.579734 1341 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 20:25:52.580033 kubelet[1341]: W0212 20:25:52.579970 1341 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 20:25:52.580560 kubelet[1341]: I0212 20:25:52.580475 1341 server.go:1232] "Started kubelet"
Feb 12 20:25:52.580922 kubelet[1341]: I0212 20:25:52.580882 1341 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 20:25:52.581330 kubelet[1341]: I0212 20:25:52.581294 1341 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 12 20:25:52.581528 kubelet[1341]: I0212 20:25:52.581293 1341 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:25:52.584492 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 20:25:52.584628 kubelet[1341]: I0212 20:25:52.584577 1341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 20:25:52.586058 kubelet[1341]: I0212 20:25:52.586022 1341 server.go:462] "Adding debug handlers to kubelet server"
Feb 12 20:25:52.595040 kubelet[1341]: E0212 20:25:52.594993 1341 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 20:25:52.595439 kubelet[1341]: E0212 20:25:52.595402 1341 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:25:52.612571 kubelet[1341]: I0212 20:25:52.612490 1341 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 12 20:25:52.613087 kubelet[1341]: I0212 20:25:52.613055 1341 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 20:25:52.613388 kubelet[1341]: I0212 20:25:52.613362 1341 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 12 20:25:52.649718 kubelet[1341]: W0212 20:25:52.649659 1341 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:25:52.649954 kubelet[1341]: E0212 20:25:52.649941 1341 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:25:52.650094 kubelet[1341]: W0212 20:25:52.650079 1341 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.189" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:25:52.650206 kubelet[1341]: E0212 20:25:52.650195 1341 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.189" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:25:52.650375 kubelet[1341]: E0212 20:25:52.650360 1341 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.189\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 12 20:25:52.650482 kubelet[1341]: W0212 20:25:52.650470 1341 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:25:52.650563 kubelet[1341]: E0212 20:25:52.650553 1341 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:25:52.650737 kubelet[1341]: E0212 20:25:52.650647 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c23c93341", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 580457281, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 580457281, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.656561 kubelet[1341]: E0212 20:25:52.656448 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c24ac87a1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 595355553, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 595355553, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.666523 kubelet[1341]: I0212 20:25:52.666484 1341 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:25:52.666523 kubelet[1341]: I0212 20:25:52.666508 1341 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:25:52.666523 kubelet[1341]: I0212 20:25:52.666536 1341 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:25:52.670022 kubelet[1341]: E0212 20:25:52.669055 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28d9e927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665438503, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665438503, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.677318 kubelet[1341]: E0212 20:25:52.677219 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da01a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665444775, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665444775, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.679199 kubelet[1341]: E0212 20:25:52.679042 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da152e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665449774, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665449774, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.681151 kubelet[1341]: I0212 20:25:52.680624 1341 policy_none.go:49] "None policy: Start"
Feb 12 20:25:52.681430 kubelet[1341]: I0212 20:25:52.681382 1341 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:25:52.681430 kubelet[1341]: I0212 20:25:52.681406 1341 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:25:52.698481 systemd[1]: Created slice kubepods.slice.
Feb 12 20:25:52.704669 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 20:25:52.715462 kubelet[1341]: I0212 20:25:52.713764 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.189"
Feb 12 20:25:52.716379 kubelet[1341]: E0212 20:25:52.716284 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28d9e927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665438503, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 713718653, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28d9e927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.716655 kubelet[1341]: E0212 20:25:52.716637 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.189"
Feb 12 20:25:52.721615 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 20:25:52.722607 kubelet[1341]: E0212 20:25:52.722505 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da01a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665444775, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 713723952, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28da01a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.726512 kubelet[1341]: E0212 20:25:52.726396 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da152e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665449774, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 713727138, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28da152e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.731170 kubelet[1341]: I0212 20:25:52.731135 1341 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:25:52.731476 kubelet[1341]: I0212 20:25:52.731453 1341 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:25:52.733549 kubelet[1341]: E0212 20:25:52.733451 1341 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.189\" not found"
Feb 12 20:25:52.735930 kubelet[1341]: E0212 20:25:52.735821 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c2ceaeabe", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 733661886, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 733661886, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.823130 kubelet[1341]: I0212 20:25:52.820613 1341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 12 20:25:52.827246 kubelet[1341]: I0212 20:25:52.827216 1341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 12 20:25:52.827391 kubelet[1341]: I0212 20:25:52.827377 1341 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 12 20:25:52.827512 kubelet[1341]: I0212 20:25:52.827499 1341 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 12 20:25:52.827677 kubelet[1341]: E0212 20:25:52.827650 1341 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 20:25:52.830463 kubelet[1341]: W0212 20:25:52.830434 1341 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:25:52.830654 kubelet[1341]: E0212 20:25:52.830641 1341 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:25:52.853191 kubelet[1341]: E0212 20:25:52.853169 1341 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.189\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 12 20:25:52.918421 kubelet[1341]: I0212 20:25:52.918379 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.189"
Feb 12 20:25:52.921495 kubelet[1341]: E0212 20:25:52.921456 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.189"
Feb 12 20:25:52.921803 kubelet[1341]: E0212 20:25:52.921415 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28d9e927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665438503, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 918303501, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28d9e927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.924551 kubelet[1341]: E0212 20:25:52.924398 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da01a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665444775, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 918315784, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28da01a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:52.926691 kubelet[1341]: E0212 20:25:52.926569 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da152e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665449774, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 918322607, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28da152e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:53.255656 kubelet[1341]: E0212 20:25:53.255605 1341 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.189\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Feb 12 20:25:53.323148 kubelet[1341]: I0212 20:25:53.323064 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.189"
Feb 12 20:25:53.325174 kubelet[1341]: E0212 20:25:53.325087 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.189"
Feb 12 20:25:53.325522 kubelet[1341]: E0212 20:25:53.325088 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28d9e927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665438503, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 53, 322991363, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28d9e927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:53.327200 kubelet[1341]: E0212 20:25:53.327089 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da01a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665444775, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 53, 323000531, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28da01a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:25:53.328429 kubelet[1341]: E0212 20:25:53.328332 1341 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.189.17b3375c28da152e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.189", UID:"172.24.4.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.189"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 52, 665449774, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 53, 323004498, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.189"}': 'events "172.24.4.189.17b3375c28da152e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:25:53.562864 kubelet[1341]: I0212 20:25:53.562687 1341 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 20:25:53.578202 kubelet[1341]: E0212 20:25:53.578064 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:54.016324 kubelet[1341]: E0212 20:25:54.016271 1341 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.189" not found Feb 12 20:25:54.065512 kubelet[1341]: E0212 20:25:54.065462 1341 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.189\" not found" node="172.24.4.189" Feb 12 20:25:54.127261 kubelet[1341]: I0212 20:25:54.127220 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.189" Feb 12 20:25:54.138208 kubelet[1341]: I0212 20:25:54.138069 1341 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.189" Feb 12 20:25:54.187342 kubelet[1341]: I0212 20:25:54.187291 1341 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 20:25:54.188697 env[1055]: time="2024-02-12T20:25:54.188567171Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 20:25:54.190972 kubelet[1341]: I0212 20:25:54.190900 1341 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 20:25:54.515412 sudo[1155]: pam_unix(sudo:session): session closed for user root Feb 12 20:25:54.578836 kubelet[1341]: E0212 20:25:54.578780 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:54.579629 kubelet[1341]: I0212 20:25:54.579435 1341 apiserver.go:52] "Watching apiserver" Feb 12 20:25:54.585905 kubelet[1341]: I0212 20:25:54.585864 1341 topology_manager.go:215] "Topology Admit Handler" podUID="b8f48bb9-7fd4-4352-8747-9d226a9e8f78" podNamespace="kube-system" podName="kube-proxy-lv56m" Feb 12 20:25:54.586082 kubelet[1341]: I0212 20:25:54.586006 1341 topology_manager.go:215] "Topology Admit Handler" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" podNamespace="kube-system" podName="cilium-lwnwt" Feb 12 20:25:54.602491 systemd[1]: Created slice kubepods-burstable-podbee0dc3b_31c8_4cc8_810b_f0f1ad747215.slice. Feb 12 20:25:54.616639 systemd[1]: Created slice kubepods-besteffort-podb8f48bb9_7fd4_4352_8747_9d226a9e8f78.slice. 
Feb 12 20:25:54.617866 kubelet[1341]: I0212 20:25:54.617828 1341 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:25:54.625449 kubelet[1341]: I0212 20:25:54.625380 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8f48bb9-7fd4-4352-8747-9d226a9e8f78-kube-proxy\") pod \"kube-proxy-lv56m\" (UID: \"b8f48bb9-7fd4-4352-8747-9d226a9e8f78\") " pod="kube-system/kube-proxy-lv56m" Feb 12 20:25:54.625622 kubelet[1341]: I0212 20:25:54.625553 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8f48bb9-7fd4-4352-8747-9d226a9e8f78-xtables-lock\") pod \"kube-proxy-lv56m\" (UID: \"b8f48bb9-7fd4-4352-8747-9d226a9e8f78\") " pod="kube-system/kube-proxy-lv56m" Feb 12 20:25:54.625746 kubelet[1341]: I0212 20:25:54.625665 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-xtables-lock\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.625823 kubelet[1341]: I0212 20:25:54.625796 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfq7g\" (UniqueName: \"kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-kube-api-access-xfq7g\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.626005 kubelet[1341]: I0212 20:25:54.625959 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8f48bb9-7fd4-4352-8747-9d226a9e8f78-lib-modules\") pod \"kube-proxy-lv56m\" (UID: 
\"b8f48bb9-7fd4-4352-8747-9d226a9e8f78\") " pod="kube-system/kube-proxy-lv56m" Feb 12 20:25:54.626167 kubelet[1341]: I0212 20:25:54.626076 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87pc\" (UniqueName: \"kubernetes.io/projected/b8f48bb9-7fd4-4352-8747-9d226a9e8f78-kube-api-access-c87pc\") pod \"kube-proxy-lv56m\" (UID: \"b8f48bb9-7fd4-4352-8747-9d226a9e8f78\") " pod="kube-system/kube-proxy-lv56m" Feb 12 20:25:54.626262 kubelet[1341]: I0212 20:25:54.626183 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-run\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.626331 kubelet[1341]: I0212 20:25:54.626287 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cni-path\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.626414 kubelet[1341]: I0212 20:25:54.626389 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-config-path\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.626583 kubelet[1341]: I0212 20:25:54.626489 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hubble-tls\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.626850 kubelet[1341]: I0212 
20:25:54.626738 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-bpf-maps\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.627389 kubelet[1341]: I0212 20:25:54.627317 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hostproc\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.627518 kubelet[1341]: I0212 20:25:54.627435 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-cgroup\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.627594 kubelet[1341]: I0212 20:25:54.627542 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-lib-modules\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.627746 kubelet[1341]: I0212 20:25:54.627664 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-clustermesh-secrets\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.627849 kubelet[1341]: I0212 20:25:54.627832 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-etc-cni-netd\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.627975 kubelet[1341]: I0212 20:25:54.627935 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-net\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.628069 kubelet[1341]: I0212 20:25:54.628048 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-kernel\") pod \"cilium-lwnwt\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") " pod="kube-system/cilium-lwnwt" Feb 12 20:25:54.772494 sshd[1151]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:54.796451 systemd[1]: sshd@4-172.24.4.189:22-172.24.4.1:44308.service: Deactivated successfully. Feb 12 20:25:54.798040 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:25:54.801256 systemd-logind[1048]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:25:54.804758 systemd-logind[1048]: Removed session 5. 
Feb 12 20:25:54.916405 env[1055]: time="2024-02-12T20:25:54.916206388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lwnwt,Uid:bee0dc3b-31c8-4cc8-810b-f0f1ad747215,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:54.932337 env[1055]: time="2024-02-12T20:25:54.932241461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv56m,Uid:b8f48bb9-7fd4-4352-8747-9d226a9e8f78,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:55.579931 kubelet[1341]: E0212 20:25:55.579792 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:55.747776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275604792.mount: Deactivated successfully. Feb 12 20:25:55.754732 env[1055]: time="2024-02-12T20:25:55.754658314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.757486 env[1055]: time="2024-02-12T20:25:55.757433699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.766923 env[1055]: time="2024-02-12T20:25:55.766868204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.771724 env[1055]: time="2024-02-12T20:25:55.771628001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.778070 env[1055]: time="2024-02-12T20:25:55.778014078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.783928 env[1055]: time="2024-02-12T20:25:55.783874509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.786061 env[1055]: time="2024-02-12T20:25:55.785987081Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.788256 env[1055]: time="2024-02-12T20:25:55.788189061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:55.855622 env[1055]: time="2024-02-12T20:25:55.853594609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:55.855622 env[1055]: time="2024-02-12T20:25:55.853682344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:55.855622 env[1055]: time="2024-02-12T20:25:55.853757765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:55.856065 env[1055]: time="2024-02-12T20:25:55.855300889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e pid=1404 runtime=io.containerd.runc.v2 Feb 12 20:25:55.859158 env[1055]: time="2024-02-12T20:25:55.858927521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:55.859487 env[1055]: time="2024-02-12T20:25:55.859376162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:55.859737 env[1055]: time="2024-02-12T20:25:55.859636781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:55.860468 env[1055]: time="2024-02-12T20:25:55.860356531Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/706c731c6dae98caa273026f9f59ac2628f75060ae9c2be769ad2beddd467314 pid=1405 runtime=io.containerd.runc.v2 Feb 12 20:25:55.892718 systemd[1]: Started cri-containerd-428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e.scope. Feb 12 20:25:55.908718 systemd[1]: Started cri-containerd-706c731c6dae98caa273026f9f59ac2628f75060ae9c2be769ad2beddd467314.scope. 
Feb 12 20:25:55.968413 env[1055]: time="2024-02-12T20:25:55.968328803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lwnwt,Uid:bee0dc3b-31c8-4cc8-810b-f0f1ad747215,Namespace:kube-system,Attempt:0,} returns sandbox id \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\"" Feb 12 20:25:55.971770 env[1055]: time="2024-02-12T20:25:55.971734410Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:25:55.973960 env[1055]: time="2024-02-12T20:25:55.973390236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv56m,Uid:b8f48bb9-7fd4-4352-8747-9d226a9e8f78,Namespace:kube-system,Attempt:0,} returns sandbox id \"706c731c6dae98caa273026f9f59ac2628f75060ae9c2be769ad2beddd467314\"" Feb 12 20:25:56.580293 kubelet[1341]: E0212 20:25:56.580210 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:57.581278 kubelet[1341]: E0212 20:25:57.581116 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:58.582423 kubelet[1341]: E0212 20:25:58.582273 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:59.583552 kubelet[1341]: E0212 20:25:59.583427 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:00.583809 kubelet[1341]: E0212 20:26:00.583739 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:01.584495 kubelet[1341]: E0212 20:26:01.584439 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:02.585222 kubelet[1341]: E0212 20:26:02.585153 1341 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:03.029862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764445809.mount: Deactivated successfully. Feb 12 20:26:03.585472 kubelet[1341]: E0212 20:26:03.585374 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:04.585840 kubelet[1341]: E0212 20:26:04.585674 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:05.586894 kubelet[1341]: E0212 20:26:05.586831 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:06.587112 kubelet[1341]: E0212 20:26:06.587021 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:07.587830 kubelet[1341]: E0212 20:26:07.587728 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:07.714831 env[1055]: time="2024-02-12T20:26:07.714681454Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:07.718697 env[1055]: time="2024-02-12T20:26:07.718666298Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:07.721906 env[1055]: time="2024-02-12T20:26:07.721879023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:26:07.722804 env[1055]: time="2024-02-12T20:26:07.722776807Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:26:07.726871 env[1055]: time="2024-02-12T20:26:07.726823888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 12 20:26:07.732674 env[1055]: time="2024-02-12T20:26:07.732629015Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:26:07.762415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537962277.mount: Deactivated successfully. Feb 12 20:26:07.767402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617274754.mount: Deactivated successfully. Feb 12 20:26:07.772167 env[1055]: time="2024-02-12T20:26:07.772091557Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\"" Feb 12 20:26:07.776024 env[1055]: time="2024-02-12T20:26:07.775944343Z" level=info msg="StartContainer for \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\"" Feb 12 20:26:07.815761 systemd[1]: Started cri-containerd-09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025.scope. Feb 12 20:26:07.900014 env[1055]: time="2024-02-12T20:26:07.897534625Z" level=info msg="StartContainer for \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\" returns successfully" Feb 12 20:26:07.899396 systemd[1]: cri-containerd-09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025.scope: Deactivated successfully. 
Feb 12 20:26:08.497303 env[1055]: time="2024-02-12T20:26:08.497043865Z" level=info msg="shim disconnected" id=09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025 Feb 12 20:26:08.497982 env[1055]: time="2024-02-12T20:26:08.497911001Z" level=warning msg="cleaning up after shim disconnected" id=09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025 namespace=k8s.io Feb 12 20:26:08.498245 env[1055]: time="2024-02-12T20:26:08.498206305Z" level=info msg="cleaning up dead shim" Feb 12 20:26:08.516844 env[1055]: time="2024-02-12T20:26:08.516765232Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1521 runtime=io.containerd.runc.v2\n" Feb 12 20:26:08.589147 kubelet[1341]: E0212 20:26:08.588758 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:08.755346 systemd[1]: run-containerd-runc-k8s.io-09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025-runc.xx5Cbg.mount: Deactivated successfully. Feb 12 20:26:08.755571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025-rootfs.mount: Deactivated successfully. Feb 12 20:26:08.918844 env[1055]: time="2024-02-12T20:26:08.918728945Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:26:08.953227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201670337.mount: Deactivated successfully. Feb 12 20:26:08.955043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25248449.mount: Deactivated successfully. 
Feb 12 20:26:08.982880 env[1055]: time="2024-02-12T20:26:08.982827843Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\"" Feb 12 20:26:08.983503 env[1055]: time="2024-02-12T20:26:08.983351294Z" level=info msg="StartContainer for \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\"" Feb 12 20:26:09.005571 systemd[1]: Started cri-containerd-a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560.scope. Feb 12 20:26:09.096072 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:26:09.096369 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:26:09.096818 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:26:09.098844 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:26:09.099157 systemd[1]: cri-containerd-a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560.scope: Deactivated successfully. Feb 12 20:26:09.115600 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 20:26:09.132531 env[1055]: time="2024-02-12T20:26:09.132448506Z" level=info msg="StartContainer for \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\" returns successfully" Feb 12 20:26:09.317834 env[1055]: time="2024-02-12T20:26:09.317762392Z" level=info msg="shim disconnected" id=a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560 Feb 12 20:26:09.317834 env[1055]: time="2024-02-12T20:26:09.317820882Z" level=warning msg="cleaning up after shim disconnected" id=a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560 namespace=k8s.io Feb 12 20:26:09.317834 env[1055]: time="2024-02-12T20:26:09.317834387Z" level=info msg="cleaning up dead shim" Feb 12 20:26:09.397722 env[1055]: time="2024-02-12T20:26:09.397580301Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1586 runtime=io.containerd.runc.v2\n" Feb 12 20:26:09.590047 kubelet[1341]: E0212 20:26:09.589951 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:09.760568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560-rootfs.mount: Deactivated successfully. Feb 12 20:26:09.925516 env[1055]: time="2024-02-12T20:26:09.925470058Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:26:09.966821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941756432.mount: Deactivated successfully. Feb 12 20:26:09.978403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848426005.mount: Deactivated successfully. 
Feb 12 20:26:09.997848 env[1055]: time="2024-02-12T20:26:09.997799512Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\"" Feb 12 20:26:09.999395 env[1055]: time="2024-02-12T20:26:09.999317749Z" level=info msg="StartContainer for \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\"" Feb 12 20:26:10.031385 systemd[1]: Started cri-containerd-ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d.scope. Feb 12 20:26:10.083371 systemd[1]: cri-containerd-ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d.scope: Deactivated successfully. Feb 12 20:26:10.084971 env[1055]: time="2024-02-12T20:26:10.084915848Z" level=info msg="StartContainer for \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\" returns successfully" Feb 12 20:26:10.328016 env[1055]: time="2024-02-12T20:26:10.327782194Z" level=info msg="shim disconnected" id=ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d Feb 12 20:26:10.328622 env[1055]: time="2024-02-12T20:26:10.328576404Z" level=warning msg="cleaning up after shim disconnected" id=ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d namespace=k8s.io Feb 12 20:26:10.328793 env[1055]: time="2024-02-12T20:26:10.328759006Z" level=info msg="cleaning up dead shim" Feb 12 20:26:10.356791 env[1055]: time="2024-02-12T20:26:10.356703967Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1643 runtime=io.containerd.runc.v2\n" Feb 12 20:26:10.590934 kubelet[1341]: E0212 20:26:10.590674 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:10.936057 env[1055]: time="2024-02-12T20:26:10.935863971Z" level=info 
msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:26:10.972975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount961021529.mount: Deactivated successfully. Feb 12 20:26:10.987861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138460183.mount: Deactivated successfully. Feb 12 20:26:11.008837 env[1055]: time="2024-02-12T20:26:11.008765432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:11.014238 env[1055]: time="2024-02-12T20:26:11.014152988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:11.014629 env[1055]: time="2024-02-12T20:26:11.014571706Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\"" Feb 12 20:26:11.015280 env[1055]: time="2024-02-12T20:26:11.015229822Z" level=info msg="StartContainer for \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\"" Feb 12 20:26:11.018434 env[1055]: time="2024-02-12T20:26:11.018390026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:11.022937 env[1055]: time="2024-02-12T20:26:11.022865002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 12 20:26:11.023364 env[1055]: 
time="2024-02-12T20:26:11.023173198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:11.028569 env[1055]: time="2024-02-12T20:26:11.028515244Z" level=info msg="CreateContainer within sandbox \"706c731c6dae98caa273026f9f59ac2628f75060ae9c2be769ad2beddd467314\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:26:11.052096 systemd[1]: Started cri-containerd-6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d.scope. Feb 12 20:26:11.091015 env[1055]: time="2024-02-12T20:26:11.084479863Z" level=info msg="CreateContainer within sandbox \"706c731c6dae98caa273026f9f59ac2628f75060ae9c2be769ad2beddd467314\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d39a4a3b0a415e46366acbbbb5952966bc025bdab0ad4cce05b2e0057182729\"" Feb 12 20:26:11.092455 env[1055]: time="2024-02-12T20:26:11.092412962Z" level=info msg="StartContainer for \"9d39a4a3b0a415e46366acbbbb5952966bc025bdab0ad4cce05b2e0057182729\"" Feb 12 20:26:11.101536 systemd[1]: cri-containerd-6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d.scope: Deactivated successfully. 
Feb 12 20:26:11.105505 env[1055]: time="2024-02-12T20:26:11.105202819Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbee0dc3b_31c8_4cc8_810b_f0f1ad747215.slice/cri-containerd-6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d.scope/memory.events\": no such file or directory" Feb 12 20:26:11.110316 env[1055]: time="2024-02-12T20:26:11.110272202Z" level=info msg="StartContainer for \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\" returns successfully" Feb 12 20:26:11.127634 systemd[1]: Started cri-containerd-9d39a4a3b0a415e46366acbbbb5952966bc025bdab0ad4cce05b2e0057182729.scope. Feb 12 20:26:11.373901 env[1055]: time="2024-02-12T20:26:11.373783626Z" level=info msg="StartContainer for \"9d39a4a3b0a415e46366acbbbb5952966bc025bdab0ad4cce05b2e0057182729\" returns successfully" Feb 12 20:26:11.381745 env[1055]: time="2024-02-12T20:26:11.381518800Z" level=info msg="shim disconnected" id=6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d Feb 12 20:26:11.382948 env[1055]: time="2024-02-12T20:26:11.382859643Z" level=warning msg="cleaning up after shim disconnected" id=6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d namespace=k8s.io Feb 12 20:26:11.383321 env[1055]: time="2024-02-12T20:26:11.383246351Z" level=info msg="cleaning up dead shim" Feb 12 20:26:11.401732 env[1055]: time="2024-02-12T20:26:11.401646266Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1752 runtime=io.containerd.runc.v2\n" Feb 12 20:26:11.591666 kubelet[1341]: E0212 20:26:11.591602 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:11.950844 env[1055]: time="2024-02-12T20:26:11.950730388Z" level=info msg="CreateContainer within sandbox 
\"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:26:11.980623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597385146.mount: Deactivated successfully. Feb 12 20:26:11.993172 env[1055]: time="2024-02-12T20:26:11.992905025Z" level=info msg="CreateContainer within sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\"" Feb 12 20:26:11.993925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177096199.mount: Deactivated successfully. Feb 12 20:26:11.995528 kubelet[1341]: I0212 20:26:11.995485 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lv56m" podStartSLOduration=2.9466737309999997 podCreationTimestamp="2024-02-12 20:25:54 +0000 UTC" firstStartedPulling="2024-02-12 20:25:55.975174863 +0000 UTC m=+3.979873793" lastFinishedPulling="2024-02-12 20:26:11.023875213 +0000 UTC m=+19.028574183" observedRunningTime="2024-02-12 20:26:11.958932384 +0000 UTC m=+19.963631344" watchObservedRunningTime="2024-02-12 20:26:11.995374121 +0000 UTC m=+20.000073071" Feb 12 20:26:11.997236 env[1055]: time="2024-02-12T20:26:11.996209292Z" level=info msg="StartContainer for \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\"" Feb 12 20:26:12.031806 systemd[1]: Started cri-containerd-95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff.scope. 
Feb 12 20:26:12.088052 env[1055]: time="2024-02-12T20:26:12.087973929Z" level=info msg="StartContainer for \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\" returns successfully" Feb 12 20:26:12.207326 kubelet[1341]: I0212 20:26:12.206683 1341 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:26:12.577889 kubelet[1341]: E0212 20:26:12.577834 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:12.592863 kubelet[1341]: E0212 20:26:12.592765 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:12.668149 kernel: Initializing XFRM netlink socket Feb 12 20:26:13.003781 kubelet[1341]: I0212 20:26:13.003669 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lwnwt" podStartSLOduration=7.249722895 podCreationTimestamp="2024-02-12 20:25:54 +0000 UTC" firstStartedPulling="2024-02-12 20:25:55.970605994 +0000 UTC m=+3.975304924" lastFinishedPulling="2024-02-12 20:26:07.724459082 +0000 UTC m=+15.729158042" observedRunningTime="2024-02-12 20:26:12.997532064 +0000 UTC m=+21.002231034" watchObservedRunningTime="2024-02-12 20:26:13.003576013 +0000 UTC m=+21.008274972" Feb 12 20:26:13.593207 kubelet[1341]: E0212 20:26:13.593093 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:14.459445 systemd-networkd[971]: cilium_host: Link UP Feb 12 20:26:14.461619 systemd-networkd[971]: cilium_net: Link UP Feb 12 20:26:14.461985 systemd-networkd[971]: cilium_net: Gained carrier Feb 12 20:26:14.465169 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 20:26:14.465287 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:26:14.466173 systemd-networkd[971]: cilium_host: Gained carrier Feb 12 20:26:14.489707 
systemd-networkd[971]: cilium_net: Gained IPv6LL Feb 12 20:26:14.594301 kubelet[1341]: E0212 20:26:14.594236 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:14.618520 systemd-networkd[971]: cilium_vxlan: Link UP Feb 12 20:26:14.618529 systemd-networkd[971]: cilium_vxlan: Gained carrier Feb 12 20:26:14.622392 systemd-networkd[971]: cilium_host: Gained IPv6LL Feb 12 20:26:14.880203 kernel: NET: Registered PF_ALG protocol family Feb 12 20:26:15.596164 kubelet[1341]: E0212 20:26:15.595963 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:15.924716 systemd-networkd[971]: lxc_health: Link UP Feb 12 20:26:15.933905 systemd-networkd[971]: lxc_health: Gained carrier Feb 12 20:26:15.934224 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:26:16.321245 systemd-networkd[971]: cilium_vxlan: Gained IPv6LL Feb 12 20:26:16.597017 kubelet[1341]: E0212 20:26:16.596845 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:17.597580 kubelet[1341]: E0212 20:26:17.597475 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:17.684271 systemd-networkd[971]: lxc_health: Gained IPv6LL Feb 12 20:26:18.598473 kubelet[1341]: E0212 20:26:18.598387 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:19.599705 kubelet[1341]: E0212 20:26:19.599571 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:20.600262 kubelet[1341]: E0212 20:26:20.600086 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
20:26:21.600843 kubelet[1341]: E0212 20:26:21.600613 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:21.930606 kubelet[1341]: I0212 20:26:21.930343 1341 topology_manager.go:215] "Topology Admit Handler" podUID="199ff4f1-e3fb-44a1-bced-58fc27bd22e6" podNamespace="default" podName="nginx-deployment-6d5f899847-vm27j" Feb 12 20:26:21.953308 systemd[1]: Created slice kubepods-besteffort-pod199ff4f1_e3fb_44a1_bced_58fc27bd22e6.slice. Feb 12 20:26:22.034427 kubelet[1341]: I0212 20:26:22.034351 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxpz\" (UniqueName: \"kubernetes.io/projected/199ff4f1-e3fb-44a1-bced-58fc27bd22e6-kube-api-access-zsxpz\") pod \"nginx-deployment-6d5f899847-vm27j\" (UID: \"199ff4f1-e3fb-44a1-bced-58fc27bd22e6\") " pod="default/nginx-deployment-6d5f899847-vm27j" Feb 12 20:26:22.261002 env[1055]: time="2024-02-12T20:26:22.260805661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vm27j,Uid:199ff4f1-e3fb-44a1-bced-58fc27bd22e6,Namespace:default,Attempt:0,}" Feb 12 20:26:22.368891 systemd-networkd[971]: lxcdb42c7a9fff6: Link UP Feb 12 20:26:22.377148 kernel: eth0: renamed from tmp311b4 Feb 12 20:26:22.384854 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:22.384973 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdb42c7a9fff6: link becomes ready Feb 12 20:26:22.385020 systemd-networkd[971]: lxcdb42c7a9fff6: Gained carrier Feb 12 20:26:22.601703 kubelet[1341]: E0212 20:26:22.601303 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:22.819502 env[1055]: time="2024-02-12T20:26:22.819356005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:22.819746 env[1055]: time="2024-02-12T20:26:22.819507935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:22.819746 env[1055]: time="2024-02-12T20:26:22.819584276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:22.820147 env[1055]: time="2024-02-12T20:26:22.820070650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/311b40b54846f7105a9480d1e96a295c7f042cb00dd03c01090cea2d3d1be5d3 pid=2389 runtime=io.containerd.runc.v2 Feb 12 20:26:22.842537 systemd[1]: Started cri-containerd-311b40b54846f7105a9480d1e96a295c7f042cb00dd03c01090cea2d3d1be5d3.scope. Feb 12 20:26:22.902807 env[1055]: time="2024-02-12T20:26:22.902725847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vm27j,Uid:199ff4f1-e3fb-44a1-bced-58fc27bd22e6,Namespace:default,Attempt:0,} returns sandbox id \"311b40b54846f7105a9480d1e96a295c7f042cb00dd03c01090cea2d3d1be5d3\"" Feb 12 20:26:22.904859 env[1055]: time="2024-02-12T20:26:22.904829451Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:26:23.167247 systemd[1]: run-containerd-runc-k8s.io-311b40b54846f7105a9480d1e96a295c7f042cb00dd03c01090cea2d3d1be5d3-runc.4z2AKv.mount: Deactivated successfully. Feb 12 20:26:23.594244 update_engine[1049]: I0212 20:26:23.593200 1049 update_attempter.cc:509] Updating boot flags... 
Feb 12 20:26:23.602336 kubelet[1341]: E0212 20:26:23.602181 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:24.375302 systemd-networkd[971]: lxcdb42c7a9fff6: Gained IPv6LL Feb 12 20:26:24.602517 kubelet[1341]: E0212 20:26:24.602373 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:25.602837 kubelet[1341]: E0212 20:26:25.602792 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:26.603648 kubelet[1341]: E0212 20:26:26.603596 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:27.604760 kubelet[1341]: E0212 20:26:27.604710 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:27.689286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939449833.mount: Deactivated successfully. 
Feb 12 20:26:28.605187 kubelet[1341]: E0212 20:26:28.605135 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:29.369636 env[1055]: time="2024-02-12T20:26:29.369436207Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:29.375098 env[1055]: time="2024-02-12T20:26:29.374990501Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:29.385312 env[1055]: time="2024-02-12T20:26:29.385219208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:29.392690 env[1055]: time="2024-02-12T20:26:29.392523313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:29.395435 env[1055]: time="2024-02-12T20:26:29.395355132Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 20:26:29.401573 env[1055]: time="2024-02-12T20:26:29.401497898Z" level=info msg="CreateContainer within sandbox \"311b40b54846f7105a9480d1e96a295c7f042cb00dd03c01090cea2d3d1be5d3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 20:26:29.430846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642615242.mount: Deactivated successfully. 
Feb 12 20:26:29.446264 env[1055]: time="2024-02-12T20:26:29.445920094Z" level=info msg="CreateContainer within sandbox \"311b40b54846f7105a9480d1e96a295c7f042cb00dd03c01090cea2d3d1be5d3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b01b425f143641d60af5971b0351b2402ffc7e59c700986875b4c49ee1ce45e3\"" Feb 12 20:26:29.448176 env[1055]: time="2024-02-12T20:26:29.448056841Z" level=info msg="StartContainer for \"b01b425f143641d60af5971b0351b2402ffc7e59c700986875b4c49ee1ce45e3\"" Feb 12 20:26:29.514614 systemd[1]: Started cri-containerd-b01b425f143641d60af5971b0351b2402ffc7e59c700986875b4c49ee1ce45e3.scope. Feb 12 20:26:29.575044 env[1055]: time="2024-02-12T20:26:29.574974006Z" level=info msg="StartContainer for \"b01b425f143641d60af5971b0351b2402ffc7e59c700986875b4c49ee1ce45e3\" returns successfully" Feb 12 20:26:29.606201 kubelet[1341]: E0212 20:26:29.606157 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:30.095775 kubelet[1341]: I0212 20:26:30.095703 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-vm27j" podStartSLOduration=2.603511584 podCreationTimestamp="2024-02-12 20:26:21 +0000 UTC" firstStartedPulling="2024-02-12 20:26:22.90404626 +0000 UTC m=+30.908745179" lastFinishedPulling="2024-02-12 20:26:29.396137769 +0000 UTC m=+37.400836739" observedRunningTime="2024-02-12 20:26:30.093457006 +0000 UTC m=+38.098155986" watchObservedRunningTime="2024-02-12 20:26:30.095603144 +0000 UTC m=+38.100302114" Feb 12 20:26:30.425455 systemd[1]: run-containerd-runc-k8s.io-b01b425f143641d60af5971b0351b2402ffc7e59c700986875b4c49ee1ce45e3-runc.h0gN39.mount: Deactivated successfully. 
Feb 12 20:26:30.606944 kubelet[1341]: E0212 20:26:30.606868 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:31.608088 kubelet[1341]: E0212 20:26:31.607971 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:32.577577 kubelet[1341]: E0212 20:26:32.577502 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:32.609217 kubelet[1341]: E0212 20:26:32.609153 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:33.610497 kubelet[1341]: E0212 20:26:33.610396 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:34.610968 kubelet[1341]: E0212 20:26:34.610875 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:35.612963 kubelet[1341]: E0212 20:26:35.612767 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:36.614801 kubelet[1341]: E0212 20:26:36.614755 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:37.616091 kubelet[1341]: E0212 20:26:37.615998 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:38.617286 kubelet[1341]: E0212 20:26:38.617210 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:38.901402 kubelet[1341]: I0212 20:26:38.900953 1341 topology_manager.go:215] "Topology Admit Handler" podUID="dcb3a886-2e95-4664-9b97-23893054ab73" podNamespace="default" 
podName="nfs-server-provisioner-0" Feb 12 20:26:38.914027 systemd[1]: Created slice kubepods-besteffort-poddcb3a886_2e95_4664_9b97_23893054ab73.slice. Feb 12 20:26:38.948492 kubelet[1341]: I0212 20:26:38.948301 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpbh\" (UniqueName: \"kubernetes.io/projected/dcb3a886-2e95-4664-9b97-23893054ab73-kube-api-access-6vpbh\") pod \"nfs-server-provisioner-0\" (UID: \"dcb3a886-2e95-4664-9b97-23893054ab73\") " pod="default/nfs-server-provisioner-0" Feb 12 20:26:38.948492 kubelet[1341]: I0212 20:26:38.948480 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/dcb3a886-2e95-4664-9b97-23893054ab73-data\") pod \"nfs-server-provisioner-0\" (UID: \"dcb3a886-2e95-4664-9b97-23893054ab73\") " pod="default/nfs-server-provisioner-0" Feb 12 20:26:39.225857 env[1055]: time="2024-02-12T20:26:39.224812424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dcb3a886-2e95-4664-9b97-23893054ab73,Namespace:default,Attempt:0,}" Feb 12 20:26:39.346329 systemd-networkd[971]: lxcb5947a0cd771: Link UP Feb 12 20:26:39.351235 kernel: eth0: renamed from tmpd13d7 Feb 12 20:26:39.360338 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:39.360503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb5947a0cd771: link becomes ready Feb 12 20:26:39.360875 systemd-networkd[971]: lxcb5947a0cd771: Gained carrier Feb 12 20:26:39.617886 kubelet[1341]: E0212 20:26:39.617791 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:39.686007 env[1055]: time="2024-02-12T20:26:39.685890523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:39.686007 env[1055]: time="2024-02-12T20:26:39.685948759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:39.686007 env[1055]: time="2024-02-12T20:26:39.685971380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:39.686504 env[1055]: time="2024-02-12T20:26:39.686438060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d13d72ad51be0a50101de2997a47e44d68100be033b6606f08c382375a0797f1 pid=2521 runtime=io.containerd.runc.v2 Feb 12 20:26:39.713133 systemd[1]: Started cri-containerd-d13d72ad51be0a50101de2997a47e44d68100be033b6606f08c382375a0797f1.scope. Feb 12 20:26:39.770707 env[1055]: time="2024-02-12T20:26:39.770638818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dcb3a886-2e95-4664-9b97-23893054ab73,Namespace:default,Attempt:0,} returns sandbox id \"d13d72ad51be0a50101de2997a47e44d68100be033b6606f08c382375a0797f1\"" Feb 12 20:26:39.772699 env[1055]: time="2024-02-12T20:26:39.772662514Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 20:26:40.438656 systemd-networkd[971]: lxcb5947a0cd771: Gained IPv6LL Feb 12 20:26:40.618201 kubelet[1341]: E0212 20:26:40.618136 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:41.618365 kubelet[1341]: E0212 20:26:41.618305 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:42.619197 kubelet[1341]: E0212 20:26:42.619019 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:43.620069 
kubelet[1341]: E0212 20:26:43.620012 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:44.337314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759796887.mount: Deactivated successfully. Feb 12 20:26:44.621971 kubelet[1341]: E0212 20:26:44.621302 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:45.622420 kubelet[1341]: E0212 20:26:45.622368 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:46.623066 kubelet[1341]: E0212 20:26:46.622993 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:47.623790 kubelet[1341]: E0212 20:26:47.623722 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:48.115789 env[1055]: time="2024-02-12T20:26:48.115574938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:48.131953 env[1055]: time="2024-02-12T20:26:48.131792172Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:48.137294 env[1055]: time="2024-02-12T20:26:48.137223634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:48.143943 env[1055]: time="2024-02-12T20:26:48.143853899Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:48.144814 env[1055]: time="2024-02-12T20:26:48.144731858Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 20:26:48.152916 env[1055]: time="2024-02-12T20:26:48.152789858Z" level=info msg="CreateContainer within sandbox \"d13d72ad51be0a50101de2997a47e44d68100be033b6606f08c382375a0797f1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 20:26:48.177439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194622487.mount: Deactivated successfully. Feb 12 20:26:48.193869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063048752.mount: Deactivated successfully. Feb 12 20:26:48.204864 env[1055]: time="2024-02-12T20:26:48.204752609Z" level=info msg="CreateContainer within sandbox \"d13d72ad51be0a50101de2997a47e44d68100be033b6606f08c382375a0797f1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f095074711b29d81f765d62314c1671b02f9b70b8beb68cf792bf3755ddccaa7\"" Feb 12 20:26:48.206552 env[1055]: time="2024-02-12T20:26:48.206500915Z" level=info msg="StartContainer for \"f095074711b29d81f765d62314c1671b02f9b70b8beb68cf792bf3755ddccaa7\"" Feb 12 20:26:48.256792 systemd[1]: Started cri-containerd-f095074711b29d81f765d62314c1671b02f9b70b8beb68cf792bf3755ddccaa7.scope. 
Feb 12 20:26:48.298489 env[1055]: time="2024-02-12T20:26:48.298404756Z" level=info msg="StartContainer for \"f095074711b29d81f765d62314c1671b02f9b70b8beb68cf792bf3755ddccaa7\" returns successfully" Feb 12 20:26:48.624517 kubelet[1341]: E0212 20:26:48.624430 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:49.129043 kubelet[1341]: I0212 20:26:49.128931 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.755279051 podCreationTimestamp="2024-02-12 20:26:38 +0000 UTC" firstStartedPulling="2024-02-12 20:26:39.772185034 +0000 UTC m=+47.776883954" lastFinishedPulling="2024-02-12 20:26:48.145711878 +0000 UTC m=+56.150410908" observedRunningTime="2024-02-12 20:26:49.127453517 +0000 UTC m=+57.132152537" watchObservedRunningTime="2024-02-12 20:26:49.128806005 +0000 UTC m=+57.133504975" Feb 12 20:26:49.626339 kubelet[1341]: E0212 20:26:49.626244 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:50.626776 kubelet[1341]: E0212 20:26:50.626693 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:51.627820 kubelet[1341]: E0212 20:26:51.627756 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:52.577867 kubelet[1341]: E0212 20:26:52.577717 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:52.629821 kubelet[1341]: E0212 20:26:52.629761 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:53.630401 kubelet[1341]: E0212 20:26:53.630349 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:54.631347 kubelet[1341]: E0212 20:26:54.631264 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:55.633143 kubelet[1341]: E0212 20:26:55.632971 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:56.633935 kubelet[1341]: E0212 20:26:56.633805 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:57.634880 kubelet[1341]: E0212 20:26:57.634822 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:57.906567 kubelet[1341]: I0212 20:26:57.906349 1341 topology_manager.go:215] "Topology Admit Handler" podUID="c3c832f7-3eac-400e-b76f-ef2c1e667bb5" podNamespace="default" podName="test-pod-1" Feb 12 20:26:57.917910 systemd[1]: Created slice kubepods-besteffort-podc3c832f7_3eac_400e_b76f_ef2c1e667bb5.slice. 
Feb 12 20:26:58.074904 kubelet[1341]: I0212 20:26:58.074860 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71a57284-5105-4e28-9d5d-f1286ef381c5\" (UniqueName: \"kubernetes.io/nfs/c3c832f7-3eac-400e-b76f-ef2c1e667bb5-pvc-71a57284-5105-4e28-9d5d-f1286ef381c5\") pod \"test-pod-1\" (UID: \"c3c832f7-3eac-400e-b76f-ef2c1e667bb5\") " pod="default/test-pod-1" Feb 12 20:26:58.075303 kubelet[1341]: I0212 20:26:58.075275 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7bqg\" (UniqueName: \"kubernetes.io/projected/c3c832f7-3eac-400e-b76f-ef2c1e667bb5-kube-api-access-t7bqg\") pod \"test-pod-1\" (UID: \"c3c832f7-3eac-400e-b76f-ef2c1e667bb5\") " pod="default/test-pod-1" Feb 12 20:26:58.255955 kernel: FS-Cache: Loaded Feb 12 20:26:58.325906 kernel: RPC: Registered named UNIX socket transport module. Feb 12 20:26:58.326204 kernel: RPC: Registered udp transport module. Feb 12 20:26:58.326255 kernel: RPC: Registered tcp transport module. Feb 12 20:26:58.326296 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 12 20:26:58.383190 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 20:26:58.613600 kernel: NFS: Registering the id_resolver key type
Feb 12 20:26:58.613846 kernel: Key type id_resolver registered
Feb 12 20:26:58.613892 kernel: Key type id_legacy registered
Feb 12 20:26:58.636479 kubelet[1341]: E0212 20:26:58.636388 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:58.673276 nfsidmap[2642]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Feb 12 20:26:58.683482 nfsidmap[2643]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Feb 12 20:26:58.828914 env[1055]: time="2024-02-12T20:26:58.827310658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c3c832f7-3eac-400e-b76f-ef2c1e667bb5,Namespace:default,Attempt:0,}"
Feb 12 20:26:58.913956 systemd-networkd[971]: lxcd3edeb7d801e: Link UP
Feb 12 20:26:58.924370 kernel: eth0: renamed from tmped045
Feb 12 20:26:58.934140 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:26:58.934244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd3edeb7d801e: link becomes ready
Feb 12 20:26:58.934376 systemd-networkd[971]: lxcd3edeb7d801e: Gained carrier
Feb 12 20:26:59.302789 env[1055]: time="2024-02-12T20:26:59.302545229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:26:59.302789 env[1055]: time="2024-02-12T20:26:59.302605792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:26:59.302789 env[1055]: time="2024-02-12T20:26:59.302623936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:26:59.303277 env[1055]: time="2024-02-12T20:26:59.302830710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed045db82df8ec89c2512669c295040c823a914719c60db0eed57d6e7f692c58 pid=2671 runtime=io.containerd.runc.v2
Feb 12 20:26:59.332362 systemd[1]: run-containerd-runc-k8s.io-ed045db82df8ec89c2512669c295040c823a914719c60db0eed57d6e7f692c58-runc.sLcuvh.mount: Deactivated successfully.
Feb 12 20:26:59.339218 systemd[1]: Started cri-containerd-ed045db82df8ec89c2512669c295040c823a914719c60db0eed57d6e7f692c58.scope.
Feb 12 20:26:59.390750 env[1055]: time="2024-02-12T20:26:59.390696924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c3c832f7-3eac-400e-b76f-ef2c1e667bb5,Namespace:default,Attempt:0,} returns sandbox id \"ed045db82df8ec89c2512669c295040c823a914719c60db0eed57d6e7f692c58\""
Feb 12 20:26:59.392459 env[1055]: time="2024-02-12T20:26:59.392427384Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 20:26:59.638227 kubelet[1341]: E0212 20:26:59.636826 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:59.927543 env[1055]: time="2024-02-12T20:26:59.927327560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:59.931592 env[1055]: time="2024-02-12T20:26:59.931514951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:59.935664 env[1055]: time="2024-02-12T20:26:59.935596946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:59.939800 env[1055]: time="2024-02-12T20:26:59.939738332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:59.941852 env[1055]: time="2024-02-12T20:26:59.941759412Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 20:26:59.946074 env[1055]: time="2024-02-12T20:26:59.945985185Z" level=info msg="CreateContainer within sandbox \"ed045db82df8ec89c2512669c295040c823a914719c60db0eed57d6e7f692c58\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 20:26:59.985533 env[1055]: time="2024-02-12T20:26:59.985431729Z" level=info msg="CreateContainer within sandbox \"ed045db82df8ec89c2512669c295040c823a914719c60db0eed57d6e7f692c58\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"11cdf2f53c996af91d207f7ace14751f33fb52fea5951ac9738e7c5f33b6ccbd\""
Feb 12 20:26:59.987162 env[1055]: time="2024-02-12T20:26:59.987063886Z" level=info msg="StartContainer for \"11cdf2f53c996af91d207f7ace14751f33fb52fea5951ac9738e7c5f33b6ccbd\""
Feb 12 20:27:00.024243 systemd[1]: Started cri-containerd-11cdf2f53c996af91d207f7ace14751f33fb52fea5951ac9738e7c5f33b6ccbd.scope.
Feb 12 20:27:00.090734 env[1055]: time="2024-02-12T20:27:00.090623063Z" level=info msg="StartContainer for \"11cdf2f53c996af91d207f7ace14751f33fb52fea5951ac9738e7c5f33b6ccbd\" returns successfully"
Feb 12 20:27:00.151726 systemd-networkd[971]: lxcd3edeb7d801e: Gained IPv6LL
Feb 12 20:27:00.167382 kubelet[1341]: I0212 20:27:00.167332 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.616783988 podCreationTimestamp="2024-02-12 20:26:41 +0000 UTC" firstStartedPulling="2024-02-12 20:26:59.391915271 +0000 UTC m=+67.396614191" lastFinishedPulling="2024-02-12 20:26:59.942348177 +0000 UTC m=+67.947047148" observedRunningTime="2024-02-12 20:27:00.166235678 +0000 UTC m=+68.170934648" watchObservedRunningTime="2024-02-12 20:27:00.167216945 +0000 UTC m=+68.171915915"
Feb 12 20:27:00.311244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167577754.mount: Deactivated successfully.
Feb 12 20:27:00.637257 kubelet[1341]: E0212 20:27:00.637014 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:01.638177 kubelet[1341]: E0212 20:27:01.638088 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:02.639685 kubelet[1341]: E0212 20:27:02.639577 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:03.640633 kubelet[1341]: E0212 20:27:03.640501 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:04.641175 kubelet[1341]: E0212 20:27:04.641085 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:05.642368 kubelet[1341]: E0212 20:27:05.642184 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:06.643425 kubelet[1341]: E0212 20:27:06.643364 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:07.644853 kubelet[1341]: E0212 20:27:07.644784 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:08.645479 kubelet[1341]: E0212 20:27:08.645406 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:09.332913 env[1055]: time="2024-02-12T20:27:09.332821453Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:27:09.344414 env[1055]: time="2024-02-12T20:27:09.344363950Z" level=info msg="StopContainer for \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\" with timeout 2 (s)"
Feb 12 20:27:09.345407 env[1055]: time="2024-02-12T20:27:09.345346546Z" level=info msg="Stop container \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\" with signal terminated"
Feb 12 20:27:09.361145 systemd-networkd[971]: lxc_health: Link DOWN
Feb 12 20:27:09.361163 systemd-networkd[971]: lxc_health: Lost carrier
Feb 12 20:27:09.414002 systemd[1]: cri-containerd-95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff.scope: Deactivated successfully.
Feb 12 20:27:09.414910 systemd[1]: cri-containerd-95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff.scope: Consumed 9.280s CPU time.
Feb 12 20:27:09.449877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff-rootfs.mount: Deactivated successfully.
Feb 12 20:27:09.469843 env[1055]: time="2024-02-12T20:27:09.468585307Z" level=info msg="shim disconnected" id=95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff
Feb 12 20:27:09.470154 env[1055]: time="2024-02-12T20:27:09.470124562Z" level=warning msg="cleaning up after shim disconnected" id=95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff namespace=k8s.io
Feb 12 20:27:09.470261 env[1055]: time="2024-02-12T20:27:09.470243735Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:09.480003 env[1055]: time="2024-02-12T20:27:09.479929104Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2803 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:09.483935 env[1055]: time="2024-02-12T20:27:09.483882838Z" level=info msg="StopContainer for \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\" returns successfully"
Feb 12 20:27:09.484898 env[1055]: time="2024-02-12T20:27:09.484856718Z" level=info msg="StopPodSandbox for \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\""
Feb 12 20:27:09.485251 env[1055]: time="2024-02-12T20:27:09.485208575Z" level=info msg="Container to stop \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.485340 env[1055]: time="2024-02-12T20:27:09.485320784Z" level=info msg="Container to stop \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.485431 env[1055]: time="2024-02-12T20:27:09.485411092Z" level=info msg="Container to stop \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.485505 env[1055]: time="2024-02-12T20:27:09.485487436Z" level=info msg="Container to stop \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.485577 env[1055]: time="2024-02-12T20:27:09.485559239Z" level=info msg="Container to stop \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.487363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e-shm.mount: Deactivated successfully.
Feb 12 20:27:09.496502 systemd[1]: cri-containerd-428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e.scope: Deactivated successfully.
Feb 12 20:27:09.526587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e-rootfs.mount: Deactivated successfully.
Feb 12 20:27:09.536021 env[1055]: time="2024-02-12T20:27:09.535909308Z" level=info msg="shim disconnected" id=428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e
Feb 12 20:27:09.536223 env[1055]: time="2024-02-12T20:27:09.536026647Z" level=warning msg="cleaning up after shim disconnected" id=428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e namespace=k8s.io
Feb 12 20:27:09.536223 env[1055]: time="2024-02-12T20:27:09.536052054Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:09.550004 env[1055]: time="2024-02-12T20:27:09.549947296Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2835 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:09.550589 env[1055]: time="2024-02-12T20:27:09.550557175Z" level=info msg="TearDown network for sandbox \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" successfully"
Feb 12 20:27:09.550687 env[1055]: time="2024-02-12T20:27:09.550663904Z" level=info msg="StopPodSandbox for \"428ee3b2779775071ded8faaab336414883b508e64fb37d0e8390f1f28463f7e\" returns successfully"
Feb 12 20:27:09.646694 kubelet[1341]: E0212 20:27:09.646476 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:09.666894 kubelet[1341]: I0212 20:27:09.666826 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-xtables-lock\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.667314 kubelet[1341]: I0212 20:27:09.667289 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-bpf-maps\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.667586 kubelet[1341]: I0212 20:27:09.667014 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.667809 kubelet[1341]: I0212 20:27:09.667384 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.667809 kubelet[1341]: I0212 20:27:09.667719 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.668024 kubelet[1341]: I0212 20:27:09.667996 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-cgroup\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.668374 kubelet[1341]: I0212 20:27:09.668349 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-run\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.668767 kubelet[1341]: I0212 20:27:09.668419 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.669039 kubelet[1341]: I0212 20:27:09.668991 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-config-path\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.669330 kubelet[1341]: I0212 20:27:09.669306 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-kernel\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.669608 kubelet[1341]: I0212 20:27:09.669548 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-etc-cni-netd\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.669887 kubelet[1341]: I0212 20:27:09.669864 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfq7g\" (UniqueName: \"kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-kube-api-access-xfq7g\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.670164 kubelet[1341]: I0212 20:27:09.670095 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cni-path\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.670442 kubelet[1341]: I0212 20:27:09.670420 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-clustermesh-secrets\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.670702 kubelet[1341]: I0212 20:27:09.670680 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hubble-tls\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.670936 kubelet[1341]: I0212 20:27:09.670895 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hostproc\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.671190 kubelet[1341]: I0212 20:27:09.671166 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-lib-modules\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.671451 kubelet[1341]: I0212 20:27:09.671395 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-net\") pod \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\" (UID: \"bee0dc3b-31c8-4cc8-810b-f0f1ad747215\") "
Feb 12 20:27:09.674027 kubelet[1341]: I0212 20:27:09.673985 1341 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-run\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.674198 kubelet[1341]: I0212 20:27:09.674036 1341 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-xtables-lock\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.674198 kubelet[1341]: I0212 20:27:09.674065 1341 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-bpf-maps\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.674198 kubelet[1341]: I0212 20:27:09.674091 1341 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-cgroup\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.674198 kubelet[1341]: I0212 20:27:09.671664 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.674198 kubelet[1341]: I0212 20:27:09.671725 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.674198 kubelet[1341]: I0212 20:27:09.671756 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.674551 kubelet[1341]: I0212 20:27:09.672792 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cni-path" (OuterVolumeSpecName: "cni-path") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.674551 kubelet[1341]: I0212 20:27:09.673882 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:27:09.674923 kubelet[1341]: I0212 20:27:09.674874 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hostproc" (OuterVolumeSpecName: "hostproc") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.675463 kubelet[1341]: I0212 20:27:09.675386 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.683472 systemd[1]: var-lib-kubelet-pods-bee0dc3b\x2d31c8\x2d4cc8\x2d810b\x2df0f1ad747215-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:27:09.685467 kubelet[1341]: I0212 20:27:09.685419 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:27:09.687897 kubelet[1341]: I0212 20:27:09.687821 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-kube-api-access-xfq7g" (OuterVolumeSpecName: "kube-api-access-xfq7g") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "kube-api-access-xfq7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:27:09.695458 kubelet[1341]: I0212 20:27:09.695383 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bee0dc3b-31c8-4cc8-810b-f0f1ad747215" (UID: "bee0dc3b-31c8-4cc8-810b-f0f1ad747215"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:27:09.774650 kubelet[1341]: I0212 20:27:09.774596 1341 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cilium-config-path\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.774982 kubelet[1341]: I0212 20:27:09.774937 1341 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-kernel\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.775247 kubelet[1341]: I0212 20:27:09.775224 1341 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-etc-cni-netd\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.775440 kubelet[1341]: I0212 20:27:09.775418 1341 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xfq7g\" (UniqueName: \"kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-kube-api-access-xfq7g\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.775645 kubelet[1341]: I0212 20:27:09.775624 1341 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-cni-path\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.775839 kubelet[1341]: I0212 20:27:09.775818 1341 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-clustermesh-secrets\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.776029 kubelet[1341]: I0212 20:27:09.776009 1341 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hubble-tls\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.776263 kubelet[1341]: I0212 20:27:09.776241 1341 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-hostproc\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.776455 kubelet[1341]: I0212 20:27:09.776435 1341 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-lib-modules\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:09.776717 kubelet[1341]: I0212 20:27:09.776693 1341 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bee0dc3b-31c8-4cc8-810b-f0f1ad747215-host-proc-sys-net\") on node \"172.24.4.189\" DevicePath \"\""
Feb 12 20:27:10.191298 kubelet[1341]: I0212 20:27:10.191253 1341 scope.go:117] "RemoveContainer" containerID="95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff"
Feb 12 20:27:10.198355 env[1055]: time="2024-02-12T20:27:10.196592222Z" level=info msg="RemoveContainer for \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\""
Feb 12 20:27:10.196946 systemd[1]: Removed slice kubepods-burstable-podbee0dc3b_31c8_4cc8_810b_f0f1ad747215.slice.
Feb 12 20:27:10.197233 systemd[1]: kubepods-burstable-podbee0dc3b_31c8_4cc8_810b_f0f1ad747215.slice: Consumed 9.448s CPU time.
Feb 12 20:27:10.202998 env[1055]: time="2024-02-12T20:27:10.202921065Z" level=info msg="RemoveContainer for \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\" returns successfully"
Feb 12 20:27:10.206010 kubelet[1341]: I0212 20:27:10.205949 1341 scope.go:117] "RemoveContainer" containerID="6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d"
Feb 12 20:27:10.208585 env[1055]: time="2024-02-12T20:27:10.208476503Z" level=info msg="RemoveContainer for \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\""
Feb 12 20:27:10.215586 env[1055]: time="2024-02-12T20:27:10.215453678Z" level=info msg="RemoveContainer for \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\" returns successfully"
Feb 12 20:27:10.216639 kubelet[1341]: I0212 20:27:10.216502 1341 scope.go:117] "RemoveContainer" containerID="ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d"
Feb 12 20:27:10.221067 env[1055]: time="2024-02-12T20:27:10.220979941Z" level=info msg="RemoveContainer for \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\""
Feb 12 20:27:10.228807 env[1055]: time="2024-02-12T20:27:10.228715674Z" level=info msg="RemoveContainer for \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\" returns successfully"
Feb 12 20:27:10.229884 kubelet[1341]: I0212 20:27:10.229815 1341 scope.go:117] "RemoveContainer" containerID="a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560"
Feb 12 20:27:10.234456 env[1055]: time="2024-02-12T20:27:10.234379033Z" level=info msg="RemoveContainer for \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\""
Feb 12 20:27:10.239832 env[1055]: time="2024-02-12T20:27:10.239742333Z" level=info msg="RemoveContainer for \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\" returns successfully"
Feb 12 20:27:10.240392 kubelet[1341]: I0212 20:27:10.240285 1341 scope.go:117] "RemoveContainer" containerID="09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025"
Feb 12 20:27:10.243421 env[1055]: time="2024-02-12T20:27:10.243363097Z" level=info msg="RemoveContainer for \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\""
Feb 12 20:27:10.248827 env[1055]: time="2024-02-12T20:27:10.248766421Z" level=info msg="RemoveContainer for \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\" returns successfully"
Feb 12 20:27:10.249521 kubelet[1341]: I0212 20:27:10.249407 1341 scope.go:117] "RemoveContainer" containerID="95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff"
Feb 12 20:27:10.250171 env[1055]: time="2024-02-12T20:27:10.249978757Z" level=error msg="ContainerStatus for \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\": not found"
Feb 12 20:27:10.250918 kubelet[1341]: E0212 20:27:10.250878 1341 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\": not found" containerID="95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff"
Feb 12 20:27:10.251291 kubelet[1341]: I0212 20:27:10.251231 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff"} err="failed to get container status \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\": rpc error: code = NotFound desc = an error occurred when try to find container \"95c7a06564c90d785eebd68ea988d887240a56aa812cb475521302c9ba6cfaff\": not found"
Feb 12 20:27:10.251291 kubelet[1341]: I0212 20:27:10.251285 1341 scope.go:117] "RemoveContainer" containerID="6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d"
Feb 12 20:27:10.251880 env[1055]: time="2024-02-12T20:27:10.251756839Z" level=error msg="ContainerStatus for \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\": not found"
Feb 12 20:27:10.252497 kubelet[1341]: E0212 20:27:10.252408 1341 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\": not found" containerID="6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d"
Feb 12 20:27:10.252716 kubelet[1341]: I0212 20:27:10.252566 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d"} err="failed to get container status \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6294ba581d03021dcadcacaeabb5fa54807fead338c7c64da04c9125d585e11d\": not found"
Feb 12 20:27:10.252716 kubelet[1341]: I0212 20:27:10.252596 1341 scope.go:117] "RemoveContainer" containerID="ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d"
Feb 12 20:27:10.253252 env[1055]: time="2024-02-12T20:27:10.253148658Z" level=error msg="ContainerStatus for \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\": not found"
Feb 12 20:27:10.253901 kubelet[1341]: E0212 20:27:10.253847 1341 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\": not found" containerID="ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d"
Feb 12 20:27:10.254032 kubelet[1341]: I0212 20:27:10.253957 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d"} err="failed to get container status \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad5471b36e5b94dca1483cae7fa30cb9afec47dfa9af174d9281c3dd3249843d\": not found"
Feb 12 20:27:10.254032 kubelet[1341]: I0212 20:27:10.254022 1341 scope.go:117] "RemoveContainer" containerID="a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560"
Feb 12 20:27:10.254610 env[1055]: time="2024-02-12T20:27:10.254510913Z" level=error msg="ContainerStatus for \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\": not found"
Feb 12 20:27:10.255245 kubelet[1341]: E0212 20:27:10.255065 1341 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\": not found" containerID="a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560"
Feb 12 20:27:10.255245 kubelet[1341]: I0212 20:27:10.255202 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560"} err="failed to get container status \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1a1191f3aac20eaf99f9967936e3109acf1d9297bc2ea231cd74dba4e771560\": not found"
Feb 12 20:27:10.255475 kubelet[1341]: I0212 20:27:10.255276 1341 scope.go:117] "RemoveContainer" containerID="09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025"
Feb 12 20:27:10.255905 env[1055]: time="2024-02-12T20:27:10.255809740Z" level=error msg="ContainerStatus for \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\": not found"
Feb 12 20:27:10.256569 kubelet[1341]: E0212 20:27:10.256402 1341 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\": not found" containerID="09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025"
Feb 12 20:27:10.256569 kubelet[1341]: I0212 20:27:10.256488 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025"} err="failed to get container status \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\": rpc error: code = NotFound desc = an error occurred when try to find container \"09f4015a14c0faa48640c3c43e817671242b7056ccf32d05f997131e5cb41025\": not found"
Feb 12 20:27:10.295827 systemd[1]: var-lib-kubelet-pods-bee0dc3b\x2d31c8\x2d4cc8\x2d810b\x2df0f1ad747215-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfq7g.mount: Deactivated successfully.
Feb 12 20:27:10.296072 systemd[1]: var-lib-kubelet-pods-bee0dc3b\x2d31c8\x2d4cc8\x2d810b\x2df0f1ad747215-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 20:27:10.647334 kubelet[1341]: E0212 20:27:10.647274 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:10.835225 kubelet[1341]: I0212 20:27:10.834084 1341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" path="/var/lib/kubelet/pods/bee0dc3b-31c8-4cc8-810b-f0f1ad747215/volumes" Feb 12 20:27:11.648467 kubelet[1341]: E0212 20:27:11.648300 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:12.577173 kubelet[1341]: E0212 20:27:12.577062 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:12.649322 kubelet[1341]: E0212 20:27:12.649215 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:12.755321 kubelet[1341]: E0212 20:27:12.755275 1341 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:27:13.649843 kubelet[1341]: E0212 20:27:13.649772 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:14.286006 kubelet[1341]: I0212 20:27:14.285860 1341 topology_manager.go:215] "Topology Admit Handler" podUID="9c8e7bf5-c35f-4afd-a974-364948761d1f" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-krsnb" Feb 12 20:27:14.286006 kubelet[1341]: E0212 20:27:14.286011 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" containerName="mount-bpf-fs" Feb 12 20:27:14.286341 kubelet[1341]: E0212 20:27:14.286041 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" 
containerName="apply-sysctl-overwrites" Feb 12 20:27:14.286341 kubelet[1341]: E0212 20:27:14.286059 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" containerName="clean-cilium-state" Feb 12 20:27:14.286341 kubelet[1341]: E0212 20:27:14.286125 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" containerName="cilium-agent" Feb 12 20:27:14.286341 kubelet[1341]: E0212 20:27:14.286147 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" containerName="mount-cgroup" Feb 12 20:27:14.286341 kubelet[1341]: I0212 20:27:14.286225 1341 memory_manager.go:346] "RemoveStaleState removing state" podUID="bee0dc3b-31c8-4cc8-810b-f0f1ad747215" containerName="cilium-agent" Feb 12 20:27:14.299483 systemd[1]: Created slice kubepods-besteffort-pod9c8e7bf5_c35f_4afd_a974_364948761d1f.slice. Feb 12 20:27:14.301973 kubelet[1341]: W0212 20:27:14.301914 1341 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.189" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.189' and this object Feb 12 20:27:14.302318 kubelet[1341]: E0212 20:27:14.302270 1341 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.189" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.189' and this object Feb 12 20:27:14.323326 kubelet[1341]: I0212 20:27:14.323253 1341 topology_manager.go:215] "Topology Admit Handler" podUID="5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" podNamespace="kube-system" podName="cilium-spfq2" Feb 12 20:27:14.335036 systemd[1]: Created slice 
kubepods-burstable-pod5e73f1ba_ada9_49ed_9e8a_514e5b7327c8.slice. Feb 12 20:27:14.410598 kubelet[1341]: I0212 20:27:14.410524 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hostproc\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.411019 kubelet[1341]: I0212 20:27:14.410993 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-etc-cni-netd\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.411375 kubelet[1341]: I0212 20:27:14.411350 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-lib-modules\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.411673 kubelet[1341]: I0212 20:27:14.411649 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-clustermesh-secrets\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.411964 kubelet[1341]: I0212 20:27:14.411941 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-config-path\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.412283 kubelet[1341]: I0212 20:27:14.412261 1341 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-net\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.412723 kubelet[1341]: I0212 20:27:14.412686 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c8e7bf5-c35f-4afd-a974-364948761d1f-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-krsnb\" (UID: \"9c8e7bf5-c35f-4afd-a974-364948761d1f\") " pod="kube-system/cilium-operator-6bc8ccdb58-krsnb" Feb 12 20:27:14.413079 kubelet[1341]: I0212 20:27:14.413054 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7dc5\" (UniqueName: \"kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-kube-api-access-k7dc5\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.413394 kubelet[1341]: I0212 20:27:14.413369 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-kernel\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.413684 kubelet[1341]: I0212 20:27:14.413661 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-ipsec-secrets\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.413975 kubelet[1341]: I0212 20:27:14.413953 1341 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4zt\" (UniqueName: \"kubernetes.io/projected/9c8e7bf5-c35f-4afd-a974-364948761d1f-kube-api-access-kg4zt\") pod \"cilium-operator-6bc8ccdb58-krsnb\" (UID: \"9c8e7bf5-c35f-4afd-a974-364948761d1f\") " pod="kube-system/cilium-operator-6bc8ccdb58-krsnb" Feb 12 20:27:14.414304 kubelet[1341]: I0212 20:27:14.414281 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cni-path\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.414587 kubelet[1341]: I0212 20:27:14.414564 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-xtables-lock\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.414883 kubelet[1341]: I0212 20:27:14.414860 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hubble-tls\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.415183 kubelet[1341]: I0212 20:27:14.415160 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-run\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.415464 kubelet[1341]: I0212 20:27:14.415441 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-bpf-maps\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.415709 kubelet[1341]: I0212 20:27:14.415683 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-cgroup\") pod \"cilium-spfq2\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " pod="kube-system/cilium-spfq2" Feb 12 20:27:14.651509 kubelet[1341]: E0212 20:27:14.650377 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:15.521317 kubelet[1341]: E0212 20:27:15.521198 1341 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:27:15.521934 kubelet[1341]: E0212 20:27:15.521884 1341 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-config-path podName:5e73f1ba-ada9-49ed-9e8a-514e5b7327c8 nodeName:}" failed. No retries permitted until 2024-02-12 20:27:16.021372353 +0000 UTC m=+84.026071314 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-config-path") pod "cilium-spfq2" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:27:15.536873 kubelet[1341]: E0212 20:27:15.536834 1341 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:27:15.537254 kubelet[1341]: E0212 20:27:15.537223 1341 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c8e7bf5-c35f-4afd-a974-364948761d1f-cilium-config-path podName:9c8e7bf5-c35f-4afd-a974-364948761d1f nodeName:}" failed. No retries permitted until 2024-02-12 20:27:16.03718823 +0000 UTC m=+84.041887200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9c8e7bf5-c35f-4afd-a974-364948761d1f-cilium-config-path") pod "cilium-operator-6bc8ccdb58-krsnb" (UID: "9c8e7bf5-c35f-4afd-a974-364948761d1f") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:27:15.651971 kubelet[1341]: E0212 20:27:15.651924 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:16.146589 env[1055]: time="2024-02-12T20:27:16.146498993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spfq2,Uid:5e73f1ba-ada9-49ed-9e8a-514e5b7327c8,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:16.189859 env[1055]: time="2024-02-12T20:27:16.189702713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:16.190068 env[1055]: time="2024-02-12T20:27:16.189892689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:16.190068 env[1055]: time="2024-02-12T20:27:16.190018745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:16.190794 env[1055]: time="2024-02-12T20:27:16.190687969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19 pid=2867 runtime=io.containerd.runc.v2 Feb 12 20:27:16.237414 systemd[1]: run-containerd-runc-k8s.io-b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19-runc.h8IhkF.mount: Deactivated successfully. Feb 12 20:27:16.247172 systemd[1]: Started cri-containerd-b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19.scope. Feb 12 20:27:16.273452 env[1055]: time="2024-02-12T20:27:16.273389965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spfq2,Uid:5e73f1ba-ada9-49ed-9e8a-514e5b7327c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19\"" Feb 12 20:27:16.277239 env[1055]: time="2024-02-12T20:27:16.277196056Z" level=info msg="CreateContainer within sandbox \"b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:27:16.296693 env[1055]: time="2024-02-12T20:27:16.296624363Z" level=info msg="CreateContainer within sandbox \"b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\"" Feb 12 20:27:16.297254 env[1055]: time="2024-02-12T20:27:16.297227493Z" level=info msg="StartContainer for \"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\"" Feb 12 20:27:16.311773 kubelet[1341]: I0212 20:27:16.311702 1341 setters.go:552] 
"Node became not ready" node="172.24.4.189" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-12T20:27:16Z","lastTransitionTime":"2024-02-12T20:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 12 20:27:16.323077 systemd[1]: Started cri-containerd-510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7.scope. Feb 12 20:27:16.337336 systemd[1]: cri-containerd-510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7.scope: Deactivated successfully. Feb 12 20:27:16.362736 env[1055]: time="2024-02-12T20:27:16.362630531Z" level=info msg="shim disconnected" id=510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7 Feb 12 20:27:16.363052 env[1055]: time="2024-02-12T20:27:16.362745607Z" level=warning msg="cleaning up after shim disconnected" id=510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7 namespace=k8s.io Feb 12 20:27:16.363052 env[1055]: time="2024-02-12T20:27:16.362773500Z" level=info msg="cleaning up dead shim" Feb 12 20:27:16.379363 env[1055]: time="2024-02-12T20:27:16.379273269Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2926 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:27:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:27:16.380276 env[1055]: time="2024-02-12T20:27:16.380036318Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Feb 12 20:27:16.384252 env[1055]: time="2024-02-12T20:27:16.384182398Z" level=error msg="Failed to pipe stderr of container 
\"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\"" error="reading from a closed fifo" Feb 12 20:27:16.384541 env[1055]: time="2024-02-12T20:27:16.384467001Z" level=error msg="Failed to pipe stdout of container \"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\"" error="reading from a closed fifo" Feb 12 20:27:16.389420 env[1055]: time="2024-02-12T20:27:16.389345023Z" level=error msg="StartContainer for \"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:27:16.390290 kubelet[1341]: E0212 20:27:16.390023 1341 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7" Feb 12 20:27:16.392066 kubelet[1341]: E0212 20:27:16.391935 1341 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:27:16.392066 kubelet[1341]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:27:16.392066 kubelet[1341]: rm /hostbin/cilium-mount Feb 12 20:27:16.392379 kubelet[1341]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k7dc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-spfq2_kube-system(5e73f1ba-ada9-49ed-9e8a-514e5b7327c8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:27:16.392379 kubelet[1341]: E0212 20:27:16.392028 1341 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-spfq2" podUID="5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" Feb 12 20:27:16.407209 env[1055]: time="2024-02-12T20:27:16.407016887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-krsnb,Uid:9c8e7bf5-c35f-4afd-a974-364948761d1f,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:16.432827 env[1055]: time="2024-02-12T20:27:16.432467599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:16.432827 env[1055]: time="2024-02-12T20:27:16.432598845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:16.432827 env[1055]: time="2024-02-12T20:27:16.432625054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:16.433443 env[1055]: time="2024-02-12T20:27:16.433329885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03481052ccc4c3ccd6c429fa9e0a7a59b1fef37511aa47d37d2667da0f6191d2 pid=2944 runtime=io.containerd.runc.v2 Feb 12 20:27:16.458805 systemd[1]: Started cri-containerd-03481052ccc4c3ccd6c429fa9e0a7a59b1fef37511aa47d37d2667da0f6191d2.scope. 
Feb 12 20:27:16.520351 env[1055]: time="2024-02-12T20:27:16.520071901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-krsnb,Uid:9c8e7bf5-c35f-4afd-a974-364948761d1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"03481052ccc4c3ccd6c429fa9e0a7a59b1fef37511aa47d37d2667da0f6191d2\"" Feb 12 20:27:16.524527 env[1055]: time="2024-02-12T20:27:16.524455285Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:27:16.653757 kubelet[1341]: E0212 20:27:16.653630 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:17.169915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534217909.mount: Deactivated successfully. Feb 12 20:27:17.222125 env[1055]: time="2024-02-12T20:27:17.221956707Z" level=info msg="StopPodSandbox for \"b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19\"" Feb 12 20:27:17.223175 env[1055]: time="2024-02-12T20:27:17.223052751Z" level=info msg="Container to stop \"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:17.227378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19-shm.mount: Deactivated successfully. Feb 12 20:27:17.245411 systemd[1]: cri-containerd-b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19.scope: Deactivated successfully. Feb 12 20:27:17.305417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19-rootfs.mount: Deactivated successfully. 
Feb 12 20:27:17.317389 env[1055]: time="2024-02-12T20:27:17.317144605Z" level=info msg="shim disconnected" id=b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19 Feb 12 20:27:17.317389 env[1055]: time="2024-02-12T20:27:17.317302009Z" level=warning msg="cleaning up after shim disconnected" id=b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19 namespace=k8s.io Feb 12 20:27:17.317389 env[1055]: time="2024-02-12T20:27:17.317330623Z" level=info msg="cleaning up dead shim" Feb 12 20:27:17.333719 env[1055]: time="2024-02-12T20:27:17.333575626Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3000 runtime=io.containerd.runc.v2\n" Feb 12 20:27:17.334660 env[1055]: time="2024-02-12T20:27:17.334472206Z" level=info msg="TearDown network for sandbox \"b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19\" successfully" Feb 12 20:27:17.334660 env[1055]: time="2024-02-12T20:27:17.334540945Z" level=info msg="StopPodSandbox for \"b3e00fd49f1a2d58122915e35ba2fd7336d8d69514386c9d8ca64d764818fa19\" returns successfully" Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.444186 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-lib-modules\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.444336 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-kernel\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.444608 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-ipsec-secrets\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.444713 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hostproc\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.444884 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7dc5\" (UniqueName: \"kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-kube-api-access-k7dc5\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.445001 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-xtables-lock\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.445180 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hubble-tls\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.445283 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-run\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 
20:27:17.445520 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-net\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.445707 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-config-path\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.445891 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-etc-cni-netd\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.446002 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-clustermesh-secrets\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.446057 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-cgroup\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.446257 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cni-path\") pod 
\"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.446333 1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-bpf-maps\") pod \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\" (UID: \"5e73f1ba-ada9-49ed-9e8a-514e5b7327c8\") " Feb 12 20:27:17.449014 kubelet[1341]: I0212 20:27:17.446495 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.450407 kubelet[1341]: I0212 20:27:17.446558 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.450407 kubelet[1341]: I0212 20:27:17.446603 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.453143 kubelet[1341]: I0212 20:27:17.450627 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.453143 kubelet[1341]: I0212 20:27:17.450722 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.453143 kubelet[1341]: I0212 20:27:17.451278 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.453143 kubelet[1341]: I0212 20:27:17.451663 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.453143 kubelet[1341]: I0212 20:27:17.451724 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cni-path" (OuterVolumeSpecName: "cni-path") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.454357 kubelet[1341]: I0212 20:27:17.454312 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.454582 kubelet[1341]: I0212 20:27:17.454546 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hostproc" (OuterVolumeSpecName: "hostproc") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.456976 kubelet[1341]: I0212 20:27:17.456894 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:27:17.462695 systemd[1]: var-lib-kubelet-pods-5e73f1ba\x2dada9\x2d49ed\x2d9e8a\x2d514e5b7327c8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 12 20:27:17.464549 kubelet[1341]: I0212 20:27:17.464457 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:27:17.469493 kubelet[1341]: I0212 20:27:17.469442 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:17.470288 systemd[1]: var-lib-kubelet-pods-5e73f1ba\x2dada9\x2d49ed\x2d9e8a\x2d514e5b7327c8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:27:17.476886 kubelet[1341]: I0212 20:27:17.476796 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:27:17.478244 kubelet[1341]: I0212 20:27:17.478185 1341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-kube-api-access-k7dc5" (OuterVolumeSpecName: "kube-api-access-k7dc5") pod "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" (UID: "5e73f1ba-ada9-49ed-9e8a-514e5b7327c8"). InnerVolumeSpecName "kube-api-access-k7dc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:17.547405 kubelet[1341]: I0212 20:27:17.547350 1341 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hostproc\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.547793 kubelet[1341]: I0212 20:27:17.547740 1341 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-kernel\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.548038 kubelet[1341]: I0212 20:27:17.547984 1341 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-ipsec-secrets\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.548318 kubelet[1341]: I0212 20:27:17.548262 1341 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-host-proc-sys-net\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.548574 kubelet[1341]: I0212 20:27:17.548549 1341 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k7dc5\" (UniqueName: \"kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-kube-api-access-k7dc5\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.548771 kubelet[1341]: I0212 20:27:17.548748 1341 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-xtables-lock\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.548983 kubelet[1341]: I0212 20:27:17.548961 1341 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-hubble-tls\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 
20:27:17.549275 kubelet[1341]: I0212 20:27:17.549229 1341 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-run\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.549478 kubelet[1341]: I0212 20:27:17.549454 1341 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-clustermesh-secrets\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.549758 kubelet[1341]: I0212 20:27:17.549711 1341 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-config-path\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.550021 kubelet[1341]: I0212 20:27:17.549970 1341 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-etc-cni-netd\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.550258 kubelet[1341]: I0212 20:27:17.550234 1341 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-bpf-maps\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.550479 kubelet[1341]: I0212 20:27:17.550456 1341 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cilium-cgroup\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.550693 kubelet[1341]: I0212 20:27:17.550670 1341 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-cni-path\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.550904 kubelet[1341]: I0212 20:27:17.550882 1341 reconciler_common.go:300] "Volume detached for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8-lib-modules\") on node \"172.24.4.189\" DevicePath \"\"" Feb 12 20:27:17.654532 kubelet[1341]: E0212 20:27:17.654485 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:17.760474 kubelet[1341]: E0212 20:27:17.760394 1341 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:27:18.164778 systemd[1]: var-lib-kubelet-pods-5e73f1ba\x2dada9\x2d49ed\x2d9e8a\x2d514e5b7327c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk7dc5.mount: Deactivated successfully. Feb 12 20:27:18.165190 systemd[1]: var-lib-kubelet-pods-5e73f1ba\x2dada9\x2d49ed\x2d9e8a\x2d514e5b7327c8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:27:18.231181 kubelet[1341]: I0212 20:27:18.229021 1341 scope.go:117] "RemoveContainer" containerID="510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7" Feb 12 20:27:18.244374 systemd[1]: Removed slice kubepods-burstable-pod5e73f1ba_ada9_49ed_9e8a_514e5b7327c8.slice. 
Feb 12 20:27:18.247885 env[1055]: time="2024-02-12T20:27:18.247736412Z" level=info msg="RemoveContainer for \"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\"" Feb 12 20:27:18.429571 env[1055]: time="2024-02-12T20:27:18.429390829Z" level=info msg="RemoveContainer for \"510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7\" returns successfully" Feb 12 20:27:18.459870 kubelet[1341]: I0212 20:27:18.459819 1341 topology_manager.go:215] "Topology Admit Handler" podUID="384cd90d-ae7c-4201-8b78-5871adfb34a8" podNamespace="kube-system" podName="cilium-8cxp8" Feb 12 20:27:18.460162 kubelet[1341]: E0212 20:27:18.459914 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" containerName="mount-cgroup" Feb 12 20:27:18.460162 kubelet[1341]: I0212 20:27:18.459965 1341 memory_manager.go:346] "RemoveStaleState removing state" podUID="5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" containerName="mount-cgroup" Feb 12 20:27:18.473493 systemd[1]: Created slice kubepods-burstable-pod384cd90d_ae7c_4201_8b78_5871adfb34a8.slice. 
Feb 12 20:27:18.558551 kubelet[1341]: I0212 20:27:18.558485 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-bpf-maps\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.558996 kubelet[1341]: I0212 20:27:18.558969 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-lib-modules\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.559403 kubelet[1341]: I0212 20:27:18.559375 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/384cd90d-ae7c-4201-8b78-5871adfb34a8-cilium-config-path\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.559727 kubelet[1341]: I0212 20:27:18.559701 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/384cd90d-ae7c-4201-8b78-5871adfb34a8-cilium-ipsec-secrets\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.560045 kubelet[1341]: I0212 20:27:18.560020 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/384cd90d-ae7c-4201-8b78-5871adfb34a8-hubble-tls\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.560404 kubelet[1341]: I0212 20:27:18.560377 1341 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-cilium-run\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.560753 kubelet[1341]: I0212 20:27:18.560727 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-hostproc\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.561081 kubelet[1341]: I0212 20:27:18.561054 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-cni-path\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.561411 kubelet[1341]: I0212 20:27:18.561386 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-etc-cni-netd\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.561737 kubelet[1341]: I0212 20:27:18.561711 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/384cd90d-ae7c-4201-8b78-5871adfb34a8-clustermesh-secrets\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.562007 kubelet[1341]: I0212 20:27:18.561982 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-host-proc-sys-kernel\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.562295 kubelet[1341]: I0212 20:27:18.562269 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-cilium-cgroup\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.562673 kubelet[1341]: I0212 20:27:18.562646 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-host-proc-sys-net\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.562922 kubelet[1341]: I0212 20:27:18.562897 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphm7\" (UniqueName: \"kubernetes.io/projected/384cd90d-ae7c-4201-8b78-5871adfb34a8-kube-api-access-dphm7\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.563194 kubelet[1341]: I0212 20:27:18.563166 1341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/384cd90d-ae7c-4201-8b78-5871adfb34a8-xtables-lock\") pod \"cilium-8cxp8\" (UID: \"384cd90d-ae7c-4201-8b78-5871adfb34a8\") " pod="kube-system/cilium-8cxp8" Feb 12 20:27:18.656274 kubelet[1341]: E0212 20:27:18.656215 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:18.787580 env[1055]: time="2024-02-12T20:27:18.787509017Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-8cxp8,Uid:384cd90d-ae7c-4201-8b78-5871adfb34a8,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:18.816024 env[1055]: time="2024-02-12T20:27:18.815891064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:18.816318 env[1055]: time="2024-02-12T20:27:18.816064189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:18.816318 env[1055]: time="2024-02-12T20:27:18.816196968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:18.816701 env[1055]: time="2024-02-12T20:27:18.816594543Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee pid=3029 runtime=io.containerd.runc.v2 Feb 12 20:27:18.834399 kubelet[1341]: I0212 20:27:18.834320 1341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5e73f1ba-ada9-49ed-9e8a-514e5b7327c8" path="/var/lib/kubelet/pods/5e73f1ba-ada9-49ed-9e8a-514e5b7327c8/volumes" Feb 12 20:27:18.848704 systemd[1]: Started cri-containerd-a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee.scope. 
Feb 12 20:27:18.905561 env[1055]: time="2024-02-12T20:27:18.905405873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8cxp8,Uid:384cd90d-ae7c-4201-8b78-5871adfb34a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\"" Feb 12 20:27:18.912792 env[1055]: time="2024-02-12T20:27:18.912722004Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:27:18.935240 env[1055]: time="2024-02-12T20:27:18.935005564Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e\"" Feb 12 20:27:18.935967 env[1055]: time="2024-02-12T20:27:18.935923684Z" level=info msg="StartContainer for \"317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e\"" Feb 12 20:27:18.961771 systemd[1]: Started cri-containerd-317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e.scope. Feb 12 20:27:18.992119 env[1055]: time="2024-02-12T20:27:18.992026169Z" level=info msg="StartContainer for \"317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e\" returns successfully" Feb 12 20:27:19.021688 systemd[1]: cri-containerd-317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e.scope: Deactivated successfully. 
Feb 12 20:27:19.065425 env[1055]: time="2024-02-12T20:27:19.063184841Z" level=info msg="shim disconnected" id=317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e Feb 12 20:27:19.065425 env[1055]: time="2024-02-12T20:27:19.064139170Z" level=warning msg="cleaning up after shim disconnected" id=317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e namespace=k8s.io Feb 12 20:27:19.065425 env[1055]: time="2024-02-12T20:27:19.064160159Z" level=info msg="cleaning up dead shim" Feb 12 20:27:19.076072 env[1055]: time="2024-02-12T20:27:19.076017501Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3113 runtime=io.containerd.runc.v2\n" Feb 12 20:27:19.240751 env[1055]: time="2024-02-12T20:27:19.240662420Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:27:19.277243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364346319.mount: Deactivated successfully. Feb 12 20:27:19.290567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772612459.mount: Deactivated successfully. Feb 12 20:27:19.298688 env[1055]: time="2024-02-12T20:27:19.298637717Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc\"" Feb 12 20:27:19.299527 env[1055]: time="2024-02-12T20:27:19.299498350Z" level=info msg="StartContainer for \"67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc\"" Feb 12 20:27:19.326167 systemd[1]: Started cri-containerd-67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc.scope. 
Feb 12 20:27:19.374561 env[1055]: time="2024-02-12T20:27:19.374477563Z" level=info msg="StartContainer for \"67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc\" returns successfully" Feb 12 20:27:19.382382 systemd[1]: cri-containerd-67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc.scope: Deactivated successfully. Feb 12 20:27:19.430674 env[1055]: time="2024-02-12T20:27:19.430581022Z" level=info msg="shim disconnected" id=67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc Feb 12 20:27:19.430674 env[1055]: time="2024-02-12T20:27:19.430657395Z" level=warning msg="cleaning up after shim disconnected" id=67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc namespace=k8s.io Feb 12 20:27:19.430674 env[1055]: time="2024-02-12T20:27:19.430669097Z" level=info msg="cleaning up dead shim" Feb 12 20:27:19.454130 env[1055]: time="2024-02-12T20:27:19.454068258Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3176 runtime=io.containerd.runc.v2\n" Feb 12 20:27:19.468988 kubelet[1341]: W0212 20:27:19.468871 1341 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e73f1ba_ada9_49ed_9e8a_514e5b7327c8.slice/cri-containerd-510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7.scope WatchSource:0}: container "510c07f0dc41959d6432a98e9f21c4d24f09257c8a3dabb1b9ebebf08ba8a9a7" in namespace "k8s.io": not found Feb 12 20:27:19.658369 kubelet[1341]: E0212 20:27:19.657623 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:20.244668 env[1055]: time="2024-02-12T20:27:20.244598752Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:27:20.249139 env[1055]: 
time="2024-02-12T20:27:20.248888211Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:27:20.253829 env[1055]: time="2024-02-12T20:27:20.253802310Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:27:20.256998 env[1055]: time="2024-02-12T20:27:20.256941093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:27:20.257665 env[1055]: time="2024-02-12T20:27:20.257618953Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:27:20.274661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157326869.mount: Deactivated successfully. Feb 12 20:27:20.276256 env[1055]: time="2024-02-12T20:27:20.269868059Z" level=info msg="CreateContainer within sandbox \"03481052ccc4c3ccd6c429fa9e0a7a59b1fef37511aa47d37d2667da0f6191d2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:27:20.284745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815226335.mount: Deactivated successfully. 
Feb 12 20:27:20.285609 env[1055]: time="2024-02-12T20:27:20.285566519Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4\"" Feb 12 20:27:20.286381 env[1055]: time="2024-02-12T20:27:20.286341260Z" level=info msg="StartContainer for \"2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4\"" Feb 12 20:27:20.306496 env[1055]: time="2024-02-12T20:27:20.306451649Z" level=info msg="CreateContainer within sandbox \"03481052ccc4c3ccd6c429fa9e0a7a59b1fef37511aa47d37d2667da0f6191d2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"880d972570c79250efad6dc93ebd6dffcaad1d07807512e97aa68645b746a56d\"" Feb 12 20:27:20.307513 env[1055]: time="2024-02-12T20:27:20.307490145Z" level=info msg="StartContainer for \"880d972570c79250efad6dc93ebd6dffcaad1d07807512e97aa68645b746a56d\"" Feb 12 20:27:20.324686 systemd[1]: Started cri-containerd-2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4.scope. Feb 12 20:27:20.350166 systemd[1]: Started cri-containerd-880d972570c79250efad6dc93ebd6dffcaad1d07807512e97aa68645b746a56d.scope. Feb 12 20:27:20.377133 env[1055]: time="2024-02-12T20:27:20.377017924Z" level=info msg="StartContainer for \"2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4\" returns successfully" Feb 12 20:27:20.395241 env[1055]: time="2024-02-12T20:27:20.395164412Z" level=info msg="StartContainer for \"880d972570c79250efad6dc93ebd6dffcaad1d07807512e97aa68645b746a56d\" returns successfully" Feb 12 20:27:20.399754 systemd[1]: cri-containerd-2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4.scope: Deactivated successfully. 
Feb 12 20:27:20.661269 kubelet[1341]: E0212 20:27:20.658591 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:20.720586 env[1055]: time="2024-02-12T20:27:20.720372153Z" level=info msg="shim disconnected" id=2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4 Feb 12 20:27:20.720586 env[1055]: time="2024-02-12T20:27:20.720521483Z" level=warning msg="cleaning up after shim disconnected" id=2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4 namespace=k8s.io Feb 12 20:27:20.720586 env[1055]: time="2024-02-12T20:27:20.720549356Z" level=info msg="cleaning up dead shim" Feb 12 20:27:20.738172 env[1055]: time="2024-02-12T20:27:20.736635172Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3276 runtime=io.containerd.runc.v2\n" Feb 12 20:27:21.254083 env[1055]: time="2024-02-12T20:27:21.253933212Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:27:21.285586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078369066.mount: Deactivated successfully. 
Feb 12 20:27:21.297954 env[1055]: time="2024-02-12T20:27:21.297832104Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147\"" Feb 12 20:27:21.299887 env[1055]: time="2024-02-12T20:27:21.299804821Z" level=info msg="StartContainer for \"454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147\"" Feb 12 20:27:21.320166 kubelet[1341]: I0212 20:27:21.319798 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-krsnb" podStartSLOduration=3.58488306 podCreationTimestamp="2024-02-12 20:27:14 +0000 UTC" firstStartedPulling="2024-02-12 20:27:16.52334317 +0000 UTC m=+84.528042131" lastFinishedPulling="2024-02-12 20:27:20.258094925 +0000 UTC m=+88.262793885" observedRunningTime="2024-02-12 20:27:21.318630202 +0000 UTC m=+89.323329172" watchObservedRunningTime="2024-02-12 20:27:21.319634814 +0000 UTC m=+89.324333784" Feb 12 20:27:21.360556 systemd[1]: Started cri-containerd-454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147.scope. Feb 12 20:27:21.398644 systemd[1]: cri-containerd-454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147.scope: Deactivated successfully. 
Feb 12 20:27:21.402429 env[1055]: time="2024-02-12T20:27:21.402319674Z" level=info msg="StartContainer for \"454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147\" returns successfully"
Feb 12 20:27:21.425215 env[1055]: time="2024-02-12T20:27:21.425170710Z" level=info msg="shim disconnected" id=454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147
Feb 12 20:27:21.425414 env[1055]: time="2024-02-12T20:27:21.425394519Z" level=warning msg="cleaning up after shim disconnected" id=454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147 namespace=k8s.io
Feb 12 20:27:21.425485 env[1055]: time="2024-02-12T20:27:21.425470310Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:21.433452 env[1055]: time="2024-02-12T20:27:21.433408908Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3332 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:21.661197 kubelet[1341]: E0212 20:27:21.659591 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:22.166142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147-rootfs.mount: Deactivated successfully.
Feb 12 20:27:22.276418 env[1055]: time="2024-02-12T20:27:22.276326535Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:27:22.319677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609742031.mount: Deactivated successfully.
Feb 12 20:27:22.334869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107609724.mount: Deactivated successfully.
Feb 12 20:27:22.364654 env[1055]: time="2024-02-12T20:27:22.364518019Z" level=info msg="CreateContainer within sandbox \"a12f23601af4fe226534fdb14ee2c25e135fde40cb4b0518954d698328bb9aee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10\""
Feb 12 20:27:22.366845 env[1055]: time="2024-02-12T20:27:22.366724394Z" level=info msg="StartContainer for \"87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10\""
Feb 12 20:27:22.411395 systemd[1]: Started cri-containerd-87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10.scope.
Feb 12 20:27:22.480867 env[1055]: time="2024-02-12T20:27:22.480803454Z" level=info msg="StartContainer for \"87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10\" returns successfully"
Feb 12 20:27:22.582777 kubelet[1341]: W0212 20:27:22.582645 1341 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod384cd90d_ae7c_4201_8b78_5871adfb34a8.slice/cri-containerd-317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e.scope WatchSource:0}: task 317f90abedcc7445d235e7810c9ef794d5780854ac26d084471f5afb7b582b1e not found: not found
Feb 12 20:27:22.659802 kubelet[1341]: E0212 20:27:22.659708 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:23.479166 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:27:23.524146 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Feb 12 20:27:23.660956 kubelet[1341]: E0212 20:27:23.660780 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:24.661370 kubelet[1341]: E0212 20:27:24.661272 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:25.511423 systemd[1]: run-containerd-runc-k8s.io-87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10-runc.c39iBE.mount: Deactivated successfully.
Feb 12 20:27:25.662391 kubelet[1341]: E0212 20:27:25.662297 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:25.692068 kubelet[1341]: W0212 20:27:25.691993 1341 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod384cd90d_ae7c_4201_8b78_5871adfb34a8.slice/cri-containerd-67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc.scope WatchSource:0}: task 67b4130a98c32602adbd2f329940c070e654cbcc293d3fded3001ce3141444cc not found: not found
Feb 12 20:27:26.662503 kubelet[1341]: E0212 20:27:26.662452 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:27.018285 systemd-networkd[971]: lxc_health: Link UP
Feb 12 20:27:27.023388 systemd-networkd[971]: lxc_health: Gained carrier
Feb 12 20:27:27.024127 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:27:27.663885 kubelet[1341]: E0212 20:27:27.663820 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:27.695113 systemd[1]: run-containerd-runc-k8s.io-87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10-runc.brT581.mount: Deactivated successfully.
Feb 12 20:27:28.630543 systemd-networkd[971]: lxc_health: Gained IPv6LL
Feb 12 20:27:28.664282 kubelet[1341]: E0212 20:27:28.664237 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:28.806627 kubelet[1341]: W0212 20:27:28.806587 1341 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod384cd90d_ae7c_4201_8b78_5871adfb34a8.slice/cri-containerd-2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4.scope WatchSource:0}: task 2d4754a206a050a75c899f93234281dd131f727d2bbb033ea9aea4e8962296d4 not found: not found
Feb 12 20:27:28.810540 kubelet[1341]: I0212 20:27:28.810489 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8cxp8" podStartSLOduration=10.810401592 podCreationTimestamp="2024-02-12 20:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:23.315664525 +0000 UTC m=+91.320363445" watchObservedRunningTime="2024-02-12 20:27:28.810401592 +0000 UTC m=+96.815100562"
Feb 12 20:27:29.664398 kubelet[1341]: E0212 20:27:29.664352 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:29.934806 systemd[1]: run-containerd-runc-k8s.io-87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10-runc.CZ7dlD.mount: Deactivated successfully.
Feb 12 20:27:30.665460 kubelet[1341]: E0212 20:27:30.665411 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:31.666681 kubelet[1341]: E0212 20:27:31.666637 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:31.930742 kubelet[1341]: W0212 20:27:31.930609 1341 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod384cd90d_ae7c_4201_8b78_5871adfb34a8.slice/cri-containerd-454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147.scope WatchSource:0}: task 454acb39ca23efc7ee62805c13649cac6abc13a8682aaf1d4d30425ac0f30147 not found: not found
Feb 12 20:27:32.137690 systemd[1]: run-containerd-runc-k8s.io-87921d4ea0538a6d41b5a2c0cd79c67c8d76b533b5bb8725b690ba7184d4ca10-runc.3yupet.mount: Deactivated successfully.
Feb 12 20:27:32.577468 kubelet[1341]: E0212 20:27:32.577385 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:32.668288 kubelet[1341]: E0212 20:27:32.668225 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:33.669817 kubelet[1341]: E0212 20:27:33.669739 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:34.670501 kubelet[1341]: E0212 20:27:34.670401 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:35.671337 kubelet[1341]: E0212 20:27:35.671290 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:36.673184 kubelet[1341]: E0212 20:27:36.673084 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:37.674253 kubelet[1341]: E0212 20:27:37.674182 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:38.675161 kubelet[1341]: E0212 20:27:38.675050 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"