Feb 9 19:21:49.983363 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:21:49.983409 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:21:49.983432 kernel: BIOS-provided physical RAM map: Feb 9 19:21:49.983446 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 9 19:21:49.983458 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 9 19:21:49.983471 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 9 19:21:49.983486 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Feb 9 19:21:49.983499 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Feb 9 19:21:49.983514 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 9 19:21:49.983526 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 9 19:21:49.983539 kernel: NX (Execute Disable) protection: active Feb 9 19:21:49.983551 kernel: SMBIOS 2.8 present. Feb 9 19:21:49.983563 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Feb 9 19:21:49.983576 kernel: Hypervisor detected: KVM Feb 9 19:21:49.983591 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:21:49.983609 kernel: kvm-clock: cpu 0, msr 7cfaa001, primary cpu clock Feb 9 19:21:49.983622 kernel: kvm-clock: using sched offset of 7650880043 cycles Feb 9 19:21:49.983636 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:21:49.983651 kernel: tsc: Detected 1996.249 MHz processor Feb 9 19:21:49.983665 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:21:49.983679 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:21:49.983693 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Feb 9 19:21:49.983707 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:21:49.983723 kernel: ACPI: Early table checksum verification disabled Feb 9 19:21:49.983737 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Feb 9 19:21:49.983751 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:21:49.983765 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:21:49.983779 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:21:49.983792 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 9 19:21:49.983806 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:21:49.983820 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 19:21:49.983834 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Feb 9 19:21:49.983850 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Feb 9 19:21:49.983864 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 9 19:21:49.983877 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Feb 9 19:21:49.983891 
kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Feb 9 19:21:49.983904 kernel: No NUMA configuration found Feb 9 19:21:49.983918 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Feb 9 19:21:49.983931 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Feb 9 19:21:49.983945 kernel: Zone ranges: Feb 9 19:21:49.983967 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:21:49.983981 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Feb 9 19:21:49.983995 kernel: Normal empty Feb 9 19:21:49.984009 kernel: Movable zone start for each node Feb 9 19:21:49.984023 kernel: Early memory node ranges Feb 9 19:21:49.984037 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 9 19:21:49.984055 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Feb 9 19:21:49.984094 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Feb 9 19:21:49.984109 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:21:49.984123 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 9 19:21:49.984137 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Feb 9 19:21:49.984151 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 9 19:21:49.984165 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:21:49.984179 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:21:49.984193 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 19:21:49.984211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:21:49.984225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:21:49.984240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:21:49.984254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:21:49.984268 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:21:49.984282 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:21:49.984296 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 9 19:21:49.984310 kernel: Booting paravirtualized kernel on KVM Feb 9 19:21:49.984325 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:21:49.984339 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:21:49.984357 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:21:49.984372 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:21:49.984384 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:21:49.984394 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 9 19:21:49.984403 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 9 19:21:49.984412 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Feb 9 19:21:49.984422 kernel: Policy zone: DMA32 Feb 9 19:21:49.984433 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:21:49.984445 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 9 19:21:49.984453 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:21:49.984460 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:21:49.984468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:21:49.984476 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 9 19:21:49.984484 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:21:49.984491 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:21:49.984499 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:21:49.984508 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:21:49.984516 kernel: rcu: RCU event tracing is enabled. Feb 9 19:21:49.984524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:21:49.984532 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:21:49.984540 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:21:49.984548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:21:49.984555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:21:49.984563 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 19:21:49.984571 kernel: Console: colour VGA+ 80x25 Feb 9 19:21:49.984580 kernel: printk: console [tty0] enabled Feb 9 19:21:49.984588 kernel: printk: console [ttyS0] enabled Feb 9 19:21:49.984595 kernel: ACPI: Core revision 20210730 Feb 9 19:21:49.984603 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:21:49.984611 kernel: x2apic enabled Feb 9 19:21:49.984619 kernel: Switched APIC routing to physical x2apic. Feb 9 19:21:49.984626 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 19:21:49.984634 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 9 19:21:49.984641 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Feb 9 19:21:49.984649 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 9 19:21:49.984658 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 9 19:21:49.984666 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:21:49.984674 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:21:49.984681 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:21:49.984689 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:21:49.984696 kernel: Speculative Store Bypass: Vulnerable Feb 9 19:21:49.984704 kernel: x86/fpu: x87 FPU will use FXSAVE Feb 9 19:21:49.984711 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:21:49.984719 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:21:49.984728 kernel: LSM: Security Framework initializing Feb 9 19:21:49.984736 kernel: SELinux: Initializing. Feb 9 19:21:49.984744 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:21:49.984751 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:21:49.984759 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Feb 9 19:21:49.984767 kernel: Performance Events: AMD PMU driver. Feb 9 19:21:49.984774 kernel: ... version: 0 Feb 9 19:21:49.984782 kernel: ... bit width: 48 Feb 9 19:21:49.984789 kernel: ... generic registers: 4 Feb 9 19:21:49.984805 kernel: ... 
value mask: 0000ffffffffffff Feb 9 19:21:49.984813 kernel: ... max period: 00007fffffffffff Feb 9 19:21:49.984822 kernel: ... fixed-purpose events: 0 Feb 9 19:21:49.984830 kernel: ... event mask: 000000000000000f Feb 9 19:21:49.984838 kernel: signal: max sigframe size: 1440 Feb 9 19:21:49.984846 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:21:49.984854 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:21:49.984862 kernel: x86: Booting SMP configuration: Feb 9 19:21:49.984872 kernel: .... node #0, CPUs: #1 Feb 9 19:21:49.984880 kernel: kvm-clock: cpu 1, msr 7cfaa041, secondary cpu clock Feb 9 19:21:49.984888 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 9 19:21:49.984896 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:21:49.984904 kernel: smpboot: Max logical packages: 2 Feb 9 19:21:49.984912 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Feb 9 19:21:49.984920 kernel: devtmpfs: initialized Feb 9 19:21:49.984927 kernel: x86/mm: Memory block size: 128MB Feb 9 19:21:49.984936 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:21:49.984946 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:21:49.984954 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:21:49.984962 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:21:49.984969 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:21:49.984977 kernel: audit: type=2000 audit(1707506509.521:1): state=initialized audit_enabled=0 res=1 Feb 9 19:21:49.984985 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:21:49.984993 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:21:49.985001 kernel: cpuidle: using governor menu Feb 9 19:21:49.985009 kernel: ACPI: bus type PCI registered Feb 9 19:21:49.985019 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:21:49.985027 kernel: dca service started, version 1.12.1 Feb 9 19:21:49.985035 kernel: PCI: Using configuration type 1 for base access Feb 9 19:21:49.985043 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:21:49.985051 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:21:49.985069 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:21:49.985077 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:21:49.985085 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:21:49.985093 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:21:49.985103 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:21:49.985111 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:21:49.985119 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:21:49.985127 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:21:49.985135 kernel: ACPI: Interpreter enabled Feb 9 19:21:49.985143 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 19:21:49.985151 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:21:49.985159 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:21:49.985167 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 9 19:21:49.985177 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:21:49.985356 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:21:49.985443 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 9 19:21:49.985456 kernel: acpiphp: Slot [3] registered Feb 9 19:21:49.985464 kernel: acpiphp: Slot [4] registered Feb 9 19:21:49.985472 kernel: acpiphp: Slot [5] registered Feb 9 19:21:49.985480 kernel: acpiphp: Slot [6] registered Feb 9 19:21:49.985492 kernel: acpiphp: Slot [7] registered Feb 9 19:21:49.985500 kernel: acpiphp: Slot [8] registered Feb 9 19:21:49.985508 kernel: acpiphp: Slot [9] registered Feb 9 19:21:49.985515 kernel: acpiphp: Slot [10] registered Feb 9 19:21:49.985523 kernel: acpiphp: Slot [11] registered Feb 9 19:21:49.985531 kernel: acpiphp: Slot [12] registered Feb 9 19:21:49.985539 kernel: acpiphp: Slot [13] registered Feb 9 19:21:49.985547 kernel: acpiphp: Slot [14] registered Feb 9 19:21:49.985554 kernel: acpiphp: Slot [15] registered Feb 9 19:21:49.985562 kernel: acpiphp: Slot [16] registered Feb 9 19:21:49.985572 kernel: acpiphp: Slot [17] registered Feb 9 19:21:49.985580 kernel: acpiphp: Slot [18] registered Feb 9 19:21:49.985588 kernel: acpiphp: Slot [19] registered Feb 9 19:21:49.985595 kernel: acpiphp: Slot [20] registered Feb 9 19:21:49.985603 kernel: acpiphp: Slot [21] registered Feb 9 19:21:49.985611 kernel: acpiphp: Slot [22] registered Feb 9 19:21:49.985619 kernel: acpiphp: Slot [23] registered Feb 9 19:21:49.985627 kernel: acpiphp: Slot [24] registered Feb 9 19:21:49.985634 kernel: acpiphp: Slot [25] registered Feb 9 19:21:49.985651 kernel: acpiphp: Slot [26] registered Feb 9 19:21:49.985659 kernel: acpiphp: Slot [27] registered Feb 9 19:21:49.985667 kernel: acpiphp: Slot [28] registered Feb 9 19:21:49.985675 kernel: acpiphp: Slot [29] registered Feb 9 19:21:49.985683 kernel: acpiphp: Slot [30] registered Feb 9 19:21:49.985691 kernel: acpiphp: Slot [31] registered Feb 9 19:21:49.985699 kernel: PCI host bridge to bus 0000:00 Feb 9 19:21:49.985794 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:21:49.985869 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:21:49.985953 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:21:49.986026 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 9 
19:21:49.986122 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 9 19:21:49.986196 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:21:49.986300 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:21:49.986394 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 19:21:49.986497 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 9 19:21:49.986583 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Feb 9 19:21:49.986664 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 19:21:49.986751 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 19:21:49.986834 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 19:21:49.986916 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 19:21:49.987004 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 19:21:49.989176 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 9 19:21:49.989275 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 9 19:21:49.989409 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 9 19:21:49.989503 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 9 19:21:49.989585 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 9 19:21:49.989667 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Feb 9 19:21:49.989756 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Feb 9 19:21:49.989839 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:21:49.989932 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 9 19:21:49.990015 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Feb 9 19:21:49.992161 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Feb 9 19:21:49.992262 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 9 19:21:49.992351 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Feb 9 19:21:49.992464 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 9 19:21:49.992552 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 9 19:21:49.992634 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Feb 9 19:21:49.992715 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 9 19:21:49.992805 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Feb 9 19:21:49.992890 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Feb 9 19:21:49.992971 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 9 19:21:49.993132 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 19:21:49.993221 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Feb 9 19:21:49.993302 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 9 19:21:49.993314 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:21:49.993323 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:21:49.993332 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:21:49.993340 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:21:49.993348 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:21:49.993360 kernel: iommu: Default domain type: Translated Feb 9 19:21:49.993368 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 9 19:21:49.993448 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 9 19:21:49.993529 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:21:49.993610 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 9 19:21:49.993621 kernel: vgaarb: loaded Feb 9 19:21:49.993630 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:21:49.993638 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:21:49.993646 kernel: PTP clock support registered Feb 9 19:21:49.993657 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:21:49.993665 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:21:49.993673 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 9 19:21:49.993681 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Feb 9 19:21:49.993689 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:21:49.993696 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:21:49.993704 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:21:49.993713 kernel: pnp: PnP ACPI init Feb 9 19:21:49.993796 kernel: pnp 00:03: [dma 2] Feb 9 19:21:49.993812 kernel: pnp: PnP ACPI: found 5 devices Feb 9 19:21:49.993821 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:21:49.993829 kernel: NET: Registered PF_INET protocol family Feb 9 19:21:49.993837 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:21:49.993845 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 19:21:49.993853 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:21:49.993861 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:21:49.993869 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 19:21:49.993879 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 19:21:49.993887 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:21:49.993895 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:21:49.993903 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:21:49.993911 kernel: NET: Registered PF_XDP protocol family Feb 9 19:21:49.993983 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:21:49.994073 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:21:49.994150 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:21:49.994220 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 9 19:21:49.994295 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 9 19:21:49.994377 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 9 19:21:49.994458 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:21:49.994539 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 19:21:49.994551 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:21:49.994559 kernel: Initialise system trusted keyrings Feb 9 19:21:49.994568 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 19:21:49.994579 kernel: Key type asymmetric registered Feb 9 19:21:49.994587 kernel: Asymmetric key parser 'x509' registered Feb 9 19:21:49.994595 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:21:49.994603 kernel: io scheduler mq-deadline 
registered Feb 9 19:21:49.994612 kernel: io scheduler kyber registered Feb 9 19:21:49.994620 kernel: io scheduler bfq registered Feb 9 19:21:49.994628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:21:49.994636 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 9 19:21:49.994644 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 19:21:49.994652 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 9 19:21:49.994663 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 19:21:49.994671 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:21:49.994679 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:21:49.994687 kernel: random: crng init done Feb 9 19:21:49.994695 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:21:49.994704 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:21:49.994711 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:21:49.994720 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:21:49.994814 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 9 19:21:49.994898 kernel: rtc_cmos 00:04: registered as rtc0 Feb 9 19:21:49.994975 kernel: rtc_cmos 00:04: setting system clock to 2024-02-09T19:21:49 UTC (1707506509) Feb 9 19:21:49.995050 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 9 19:21:49.997134 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:21:49.997145 kernel: Segment Routing with IPv6 Feb 9 19:21:49.997154 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:21:49.997163 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:21:49.997172 kernel: Key type dns_resolver registered Feb 9 19:21:49.997184 kernel: IPI shorthand broadcast: enabled Feb 9 19:21:49.997192 kernel: sched_clock: Marking stable (676808188, 115293581)->(814923859, -22822090) Feb 9 19:21:49.997201 kernel: registered taskstats version 1 Feb 9 19:21:49.997209 kernel: Loading compiled-in X.509 certificates Feb 9 19:21:49.997217 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:21:49.997226 kernel: Key type .fscrypt registered Feb 9 19:21:49.997234 kernel: Key type fscrypt-provisioning registered Feb 9 19:21:49.997242 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 9 19:21:49.997252 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:21:49.997260 kernel: ima: No architecture policies found Feb 9 19:21:49.997268 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:21:49.997276 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:21:49.997284 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:21:49.997292 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:21:49.997301 kernel: Run /init as init process Feb 9 19:21:49.997309 kernel: with arguments: Feb 9 19:21:49.997316 kernel: /init Feb 9 19:21:49.997326 kernel: with environment: Feb 9 19:21:49.997334 kernel: HOME=/ Feb 9 19:21:49.997343 kernel: TERM=linux Feb 9 19:21:49.997351 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:21:49.997362 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:21:49.997373 systemd[1]: Detected virtualization kvm. Feb 9 19:21:49.997382 systemd[1]: Detected architecture x86-64. Feb 9 19:21:49.997391 systemd[1]: Running in initrd. Feb 9 19:21:49.997401 systemd[1]: No hostname configured, using default hostname. Feb 9 19:21:49.997410 systemd[1]: Hostname set to . Feb 9 19:21:49.997419 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:21:49.997428 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:21:49.997436 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:21:49.997445 systemd[1]: Reached target cryptsetup.target. Feb 9 19:21:49.997453 systemd[1]: Reached target paths.target. Feb 9 19:21:49.997462 systemd[1]: Reached target slices.target. Feb 9 19:21:49.997472 systemd[1]: Reached target swap.target. Feb 9 19:21:49.997480 systemd[1]: Reached target timers.target. Feb 9 19:21:49.997489 systemd[1]: Listening on iscsid.socket. Feb 9 19:21:49.997498 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:21:49.997506 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:21:49.997515 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:21:49.997524 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:21:49.997534 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:21:49.997542 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:21:49.997551 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:21:49.997559 systemd[1]: Reached target sockets.target. Feb 9 19:21:49.997568 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:21:49.997585 systemd[1]: Finished network-cleanup.service. Feb 9 19:21:49.997596 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:21:49.997607 systemd[1]: Starting systemd-journald.service... Feb 9 19:21:49.997616 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:21:49.997625 systemd[1]: Starting systemd-resolved.service... Feb 9 19:21:49.997634 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:21:49.997643 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:21:49.997652 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:21:49.997669 systemd-journald[185]: Journal started Feb 9 19:21:49.997719 systemd-journald[185]: Runtime Journal (/run/log/journal/a459ad3f69004c41accdb4fe5f5e6168) is 4.9M, max 39.5M, 34.5M free. 
Feb 9 19:21:49.987108 systemd-modules-load[186]: Inserted module 'overlay' Feb 9 19:21:49.991358 systemd-resolved[187]: Positive Trust Anchors: Feb 9 19:21:50.013654 systemd[1]: Started systemd-resolved.service. Feb 9 19:21:50.013674 kernel: audit: type=1130 audit(1707506510.008:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:49.991370 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:21:50.018933 systemd[1]: Started systemd-journald.service. Feb 9 19:21:50.018957 kernel: audit: type=1130 audit(1707506510.014:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:49.991406 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:21:49.994975 systemd-resolved[187]: Defaulting to hostname 'linux'. Feb 9 19:21:50.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.022816 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:21:50.042306 kernel: audit: type=1130 audit(1707506510.022:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.042355 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:21:50.042379 kernel: audit: type=1130 audit(1707506510.028:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.042401 kernel: Bridge firewalling registered Feb 9 19:21:50.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.028820 systemd[1]: Reached target nss-lookup.target. Feb 9 19:21:50.030083 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:21:50.034765 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 9 19:21:50.047111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:21:50.052854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 19:21:50.059363 kernel: audit: type=1130 audit(1707506510.055:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.055626 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:21:50.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.061291 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:21:50.066423 kernel: audit: type=1130 audit(1707506510.060:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.072214 dracut-cmdline[203]: dracut-dracut-053 Feb 9 19:21:50.074629 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:21:50.080121 kernel: SCSI subsystem initialized Feb 9 19:21:50.099523 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:21:50.099604 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:21:50.101757 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:21:50.106447 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 9 19:21:50.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.108258 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:21:50.114818 kernel: audit: type=1130 audit(1707506510.108:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.109774 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:21:50.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.120391 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:21:50.124604 kernel: audit: type=1130 audit(1707506510.120:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.142086 kernel: Loading iSCSI transport class v2.0-870. 
Feb 9 19:21:50.155093 kernel: iscsi: registered transport (tcp) Feb 9 19:21:50.180112 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:21:50.180205 kernel: QLogic iSCSI HBA Driver Feb 9 19:21:50.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.218935 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:21:50.226605 kernel: audit: type=1130 audit(1707506510.219:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.221767 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:21:50.302133 kernel: raid6: sse2x4 gen() 11664 MB/s Feb 9 19:21:50.319096 kernel: raid6: sse2x4 xor() 4824 MB/s Feb 9 19:21:50.336086 kernel: raid6: sse2x2 gen() 13983 MB/s Feb 9 19:21:50.353088 kernel: raid6: sse2x2 xor() 8733 MB/s Feb 9 19:21:50.370085 kernel: raid6: sse2x1 gen() 10773 MB/s Feb 9 19:21:50.387903 kernel: raid6: sse2x1 xor() 6825 MB/s Feb 9 19:21:50.387969 kernel: raid6: using algorithm sse2x2 gen() 13983 MB/s Feb 9 19:21:50.387999 kernel: raid6: .... xor() 8733 MB/s, rmw enabled Feb 9 19:21:50.388787 kernel: raid6: using ssse3x2 recovery algorithm Feb 9 19:21:50.405102 kernel: xor: measuring software checksum speed Feb 9 19:21:50.405169 kernel: prefetch64-sse : 17233 MB/sec Feb 9 19:21:50.407349 kernel: generic_sse : 16519 MB/sec Feb 9 19:21:50.407395 kernel: xor: using function: prefetch64-sse (17233 MB/sec) Feb 9 19:21:50.520124 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:21:50.535564 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:21:50.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.537000 audit: BPF prog-id=7 op=LOAD Feb 9 19:21:50.538000 audit: BPF prog-id=8 op=LOAD Feb 9 19:21:50.539411 systemd[1]: Starting systemd-udevd.service... Feb 9 19:21:50.553332 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 9 19:21:50.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.558202 systemd[1]: Started systemd-udevd.service. Feb 9 19:21:50.562925 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:21:50.584759 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 9 19:21:50.628197 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:21:50.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.630747 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:21:50.673764 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:21:50.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:50.756307 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 9 19:21:50.768205 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 9 19:21:50.768284 kernel: GPT:17805311 != 41943039 Feb 9 19:21:50.768298 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:21:50.768311 kernel: GPT:17805311 != 41943039 Feb 9 19:21:50.768323 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:21:50.768726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:21:50.789099 kernel: libata version 3.00 loaded. Feb 9 19:21:50.793246 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:21:50.801088 kernel: scsi host0: ata_piix Feb 9 19:21:50.811084 kernel: scsi host1: ata_piix Feb 9 19:21:50.811292 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 9 19:21:50.811307 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 9 19:21:50.861149 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (437) Feb 9 19:21:50.878953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:21:50.996702 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:21:51.039532 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:21:51.052144 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:21:51.053502 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:21:51.057853 systemd[1]: Starting disk-uuid.service... Feb 9 19:21:51.083121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:21:51.085303 disk-uuid[461]: Primary Header is updated. Feb 9 19:21:51.085303 disk-uuid[461]: Secondary Entries is updated. Feb 9 19:21:51.085303 disk-uuid[461]: Secondary Header is updated. Feb 9 19:21:51.097128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:21:52.113113 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:21:52.113554 disk-uuid[462]: The operation has completed successfully. Feb 9 19:21:52.186648 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:21:52.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.186929 systemd[1]: Finished disk-uuid.service. Feb 9 19:21:52.205465 systemd[1]: Starting verity-setup.service... Feb 9 19:21:52.234096 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 9 19:21:52.331369 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:21:52.335737 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:21:52.342032 systemd[1]: Finished verity-setup.service. Feb 9 19:21:52.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.507162 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:21:52.508781 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:21:52.510253 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:21:52.511917 systemd[1]: Starting ignition-setup.service... Feb 9 19:21:52.514797 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 19:21:52.529435 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:21:52.529501 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:21:52.529513 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:21:52.551849 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:21:52.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.566437 systemd[1]: Finished ignition-setup.service. Feb 9 19:21:52.568822 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:21:52.651274 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:21:52.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.652000 audit: BPF prog-id=9 op=LOAD Feb 9 19:21:52.653403 systemd[1]: Starting systemd-networkd.service... Feb 9 19:21:52.677836 systemd-networkd[632]: lo: Link UP Feb 9 19:21:52.677849 systemd-networkd[632]: lo: Gained carrier Feb 9 19:21:52.678635 systemd-networkd[632]: Enumeration completed Feb 9 19:21:52.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.679089 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:21:52.679250 systemd[1]: Started systemd-networkd.service. Feb 9 19:21:52.680876 systemd[1]: Reached target network.target. Feb 9 19:21:52.682708 systemd[1]: Starting iscsiuio.service... Feb 9 19:21:52.683692 systemd-networkd[632]: eth0: Link UP Feb 9 19:21:52.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.683704 systemd-networkd[632]: eth0: Gained carrier Feb 9 19:21:52.689626 systemd[1]: Started iscsiuio.service. Feb 9 19:21:52.692112 systemd[1]: Starting iscsid.service... Feb 9 19:21:52.698755 iscsid[637]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:21:52.698755 iscsid[637]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:21:52.698755 iscsid[637]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:21:52.698755 iscsid[637]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:21:52.698755 iscsid[637]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:21:52.698755 iscsid[637]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:21:52.698755 iscsid[637]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:21:52.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:21:52.696742 systemd[1]: Started iscsid.service. Feb 9 19:21:52.703644 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:21:52.711228 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.101/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:21:52.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:52.721499 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:21:52.722126 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:21:52.722668 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:21:52.723660 systemd[1]: Reached target remote-fs.target. Feb 9 19:21:52.725352 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:21:52.734850 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:21:52.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:53.861152 ignition[554]: Ignition 2.14.0 Feb 9 19:21:53.861185 ignition[554]: Stage: fetch-offline Feb 9 19:21:53.861351 ignition[554]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:21:53.861409 ignition[554]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:21:53.863822 ignition[554]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:21:53.864212 ignition[554]: parsed url from cmdline: "" Feb 9 19:21:53.867642 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:21:53.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:53.864222 ignition[554]: no config URL provided Feb 9 19:21:53.864237 ignition[554]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:21:53.871300 systemd[1]: Starting ignition-fetch.service... Feb 9 19:21:53.864269 ignition[554]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:21:53.864282 ignition[554]: failed to fetch config: resource requires networking Feb 9 19:21:53.864952 ignition[554]: Ignition finished successfully Feb 9 19:21:53.892702 ignition[656]: Ignition 2.14.0 Feb 9 19:21:53.892722 ignition[656]: Stage: fetch Feb 9 19:21:53.892973 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:21:53.893014 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:21:53.895427 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:21:53.895679 ignition[656]: parsed url from cmdline: "" Feb 9 19:21:53.895689 ignition[656]: no config URL provided Feb 9 19:21:53.895702 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:21:53.895722 ignition[656]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:21:53.898571 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 9 19:21:53.898629 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 9 19:21:53.902921 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 9 19:21:54.088974 systemd-networkd[632]: eth0: Gained IPv6LL Feb 9 19:21:54.233913 ignition[656]: GET result: OK Feb 9 19:21:54.234818 ignition[656]: parsing config with SHA512: 8f22936c10cf9913ea8dc7d48dc2c304c5d90dba5ffbe858bf301f03a82192caee313818593e9ae2e0ca15ef691bf580ef06cfc7b7d3e8ce77ad291a2c575fd4 Feb 9 19:21:54.297496 unknown[656]: fetched base config from "system" Feb 9 19:21:54.299008 unknown[656]: fetched base config from "system" Feb 9 19:21:54.300193 unknown[656]: fetched user config from "openstack" Feb 9 19:21:54.301556 ignition[656]: fetch: fetch complete Feb 9 19:21:54.301571 ignition[656]: fetch: fetch passed Feb 9 19:21:54.301746 ignition[656]: Ignition finished successfully Feb 9 19:21:54.304835 systemd[1]: Finished ignition-fetch.service. Feb 9 19:21:54.318029 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:21:54.318114 kernel: audit: type=1130 audit(1707506514.306:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:54.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:54.318465 systemd[1]: Starting ignition-kargs.service... Feb 9 19:21:54.341767 ignition[662]: Ignition 2.14.0 Feb 9 19:21:54.341795 ignition[662]: Stage: kargs Feb 9 19:21:54.342047 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:21:54.342134 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:21:54.344259 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:21:54.346910 ignition[662]: kargs: kargs passed Feb 9 19:21:54.347020 ignition[662]: Ignition finished successfully Feb 9 19:21:54.348366 systemd[1]: Finished ignition-kargs.service. Feb 9 19:21:54.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:54.351764 systemd[1]: Starting ignition-disks.service... Feb 9 19:21:54.354830 kernel: audit: type=1130 audit(1707506514.349:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:54.365001 ignition[668]: Ignition 2.14.0 Feb 9 19:21:54.365014 ignition[668]: Stage: disks Feb 9 19:21:54.365162 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:21:54.365185 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:21:54.366186 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:21:54.367340 ignition[668]: disks: disks passed Feb 9 19:21:54.373121 kernel: audit: type=1130 audit(1707506514.368:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:21:54.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:54.368170 systemd[1]: Finished ignition-disks.service. Feb 9 19:21:54.367386 ignition[668]: Ignition finished successfully Feb 9 19:21:54.368937 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:21:54.373541 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:21:54.374545 systemd[1]: Reached target local-fs.target. Feb 9 19:21:54.375576 systemd[1]: Reached target sysinit.target. Feb 9 19:21:54.376582 systemd[1]: Reached target basic.target. Feb 9 19:21:54.378420 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:21:54.890047 systemd-fsck[675]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:21:54.920843 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:21:54.932441 kernel: audit: type=1130 audit(1707506514.921:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:54.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:54.923568 systemd[1]: Mounting sysroot.mount... Feb 9 19:21:54.955146 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:21:54.955726 systemd[1]: Mounted sysroot.mount. Feb 9 19:21:54.957207 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:21:54.962496 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:21:54.964768 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:21:54.966645 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 9 19:21:54.968214 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:21:54.968300 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:21:54.977584 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:21:54.989629 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:21:54.994245 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:21:55.010670 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:21:55.026672 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Feb 9 19:21:55.040149 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:21:55.040251 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:21:55.040278 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:21:55.046844 initrd-setup-root[711]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:21:55.054728 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:21:55.065697 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:21:55.067705 initrd-setup-root[729]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:21:55.191029 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:21:55.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:21:55.194452 systemd[1]: Starting ignition-mount.service... Feb 9 19:21:55.203385 kernel: audit: type=1130 audit(1707506515.192:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.207396 systemd[1]: Starting sysroot-boot.service... Feb 9 19:21:55.227052 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:21:55.227512 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:21:55.252863 ignition[750]: INFO : Ignition 2.14.0 Feb 9 19:21:55.253719 ignition[750]: INFO : Stage: mount Feb 9 19:21:55.254322 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:21:55.255022 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:21:55.257792 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:21:55.259694 ignition[750]: INFO : mount: mount passed Feb 9 19:21:55.260286 ignition[750]: INFO : Ignition finished successfully Feb 9 19:21:55.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.261605 systemd[1]: Finished ignition-mount.service. Feb 9 19:21:55.266096 kernel: audit: type=1130 audit(1707506515.262:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.273456 systemd[1]: Finished sysroot-boot.service. Feb 9 19:21:55.277743 kernel: audit: type=1130 audit(1707506515.273:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.290617 coreos-metadata[681]: Feb 09 19:21:55.290 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:21:55.310433 coreos-metadata[681]: Feb 09 19:21:55.310 INFO Fetch successful Feb 9 19:21:55.310996 coreos-metadata[681]: Feb 09 19:21:55.310 INFO wrote hostname ci-3510-3-2-c-009518a0f7.novalocal to /sysroot/etc/hostname Feb 9 19:21:55.315323 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 9 19:21:55.315432 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 9 19:21:55.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.317908 systemd[1]: Starting ignition-files.service... 
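[Editor's note] The coreos-metadata entries above show the OpenStack metadata endpoint being queried for the instance hostname and the result being written to /sysroot/etc/hostname before switch-root. Below is a minimal Python sketch of that fetch-and-write flow; the URL and destination path are taken from the log, while the retry behaviour and function names are illustrative assumptions, not the actual coreos-metadata implementation.

```python
# Illustrative sketch only -- not the coreos-metadata implementation.
# Endpoint and destination path come from the log entries above;
# retry behaviour and function names are assumptions for illustration.
import time
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/hostname"
DESTINATION = "/sysroot/etc/hostname"

def fetch_hostname(url: str = METADATA_URL, attempts: int = 3) -> str:
    """Fetch the instance hostname, retrying on transient errors."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read().decode().strip()
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(2 * attempt)  # simple backoff between attempts

def write_hostname(hostname: str, path: str = DESTINATION) -> None:
    """Persist the hostname so the real root picks it up after switch-root."""
    with open(path, "w") as f:
        f.write(hostname + "\n")

if __name__ == "__main__":
    write_hostname(fetch_hostname())
```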
Feb 9 19:21:55.325752 kernel: audit: type=1130 audit(1707506515.316:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.325778 kernel: audit: type=1131 audit(1707506515.316:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:21:55.328238 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:21:55.355129 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Feb 9 19:21:55.406458 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:21:55.406580 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:21:55.406609 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:21:55.447231 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:21:55.471011 ignition[777]: INFO : Ignition 2.14.0 Feb 9 19:21:55.471011 ignition[777]: INFO : Stage: files Feb 9 19:21:55.473661 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:21:55.473661 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:21:55.473661 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:21:55.480892 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:21:55.483391 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:21:55.483391 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:21:55.502644 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:21:55.504922 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:21:55.506725 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:21:55.505689 unknown[777]: wrote ssh authorized keys file for user: core Feb 9 19:21:55.510200 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:21:55.510200 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:21:55.973253 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:21:56.675296 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:21:56.675296 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:21:56.675296 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:21:56.675296 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:21:57.065999 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:21:57.508494 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:21:57.508494 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:21:57.542997 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:21:57.545040 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:21:57.721280 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:21:58.622472 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:21:58.626715 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:21:58.629170 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:21:58.629170 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:21:58.769245 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:22:00.986107 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:22:00.986107 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:22:00.986107 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:22:00.993792 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:22:00.993792 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:22:00.993792 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:22:01.035193 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:22:01.035193 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:22:01.039311 ignition[777]: INFO : files: op(a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:22:01.083156 ignition[777]: INFO : files: op(a): op(b): [started] writing systemd drop-in "20-clct-provider-override.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(c): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(c): op(d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(c): op(d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(c): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(e): [started] processing unit "prepare-critools.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(e): op(f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(e): op(f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(e): [finished] processing unit "prepare-critools.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:22:01.086106 ignition[777]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:22:01.121450 ignition[777]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:22:01.121450 ignition[777]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:22:01.121450 ignition[777]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:22:01.130767 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:22:01.133156 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:22:01.133156 ignition[777]: INFO : files: files passed Feb 9 19:22:01.133156 ignition[777]: INFO : Ignition finished successfully Feb 9 19:22:01.135822 systemd[1]: Finished 
ignition-files.service. Feb 9 19:22:01.156138 kernel: audit: type=1130 audit(1707506521.141:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.145243 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:22:01.152983 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:22:01.154724 systemd[1]: Starting ignition-quench.service... Feb 9 19:22:01.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.215334 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:22:01.234120 kernel: audit: type=1130 audit(1707506521.216:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.234168 kernel: audit: type=1131 audit(1707506521.216:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.215530 systemd[1]: Finished ignition-quench.service. Feb 9 19:22:01.251774 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:22:01.253143 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:22:01.265909 kernel: audit: type=1130 audit(1707506521.255:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.255878 systemd[1]: Reached target ignition-complete.target. Feb 9 19:22:01.268518 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:22:01.302668 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:22:01.302888 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:22:01.322807 kernel: audit: type=1130 audit(1707506521.305:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.322864 kernel: audit: type=1131 audit(1707506521.305:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:22:01.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.305582 systemd[1]: Reached target initrd-fs.target. Feb 9 19:22:01.323757 systemd[1]: Reached target initrd.target. Feb 9 19:22:01.325853 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:22:01.327697 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:22:01.358361 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:22:01.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.369224 kernel: audit: type=1130 audit(1707506521.359:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.361803 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:22:01.388617 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:22:01.389726 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:22:01.391777 systemd[1]: Stopped target timers.target. Feb 9 19:22:01.403817 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:22:01.404166 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:22:01.415404 kernel: audit: type=1131 audit(1707506521.406:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.406630 systemd[1]: Stopped target initrd.target. Feb 9 19:22:01.416392 systemd[1]: Stopped target basic.target. Feb 9 19:22:01.418306 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:22:01.420208 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:22:01.422206 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:22:01.424282 systemd[1]: Stopped target remote-fs.target. Feb 9 19:22:01.426246 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:22:01.428211 systemd[1]: Stopped target sysinit.target. Feb 9 19:22:01.430172 systemd[1]: Stopped target local-fs.target. Feb 9 19:22:01.432145 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:22:01.434037 systemd[1]: Stopped target swap.target. Feb 9 19:22:01.435879 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:22:01.446719 kernel: audit: type=1131 audit(1707506521.437:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.436191 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:22:01.438034 systemd[1]: Stopped target cryptsetup.target. 
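[Editor's note] The Ignition files stage logged further above downloads cni-plugins, crictl, kubeadm and kubelet and confirms each download against an expected SHA-512 digest before writing it under /sysroot. The sketch below illustrates that verify-before-keep step using the cni-plugins URL and digest reported in the log; the chunking and naming are assumptions for illustration, not Ignition's internals.

```python
# Illustrative sketch of download-then-verify, mirroring the "file matches
# expected sum" entries above. URL and digest come from the log; everything
# else (chunk size, function name) is an assumption for illustration.
import hashlib
import urllib.request

URL = ("https://github.com/containernetworking/plugins/releases/"
       "download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz")
EXPECTED_SHA512 = ("4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30"
                   "c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d")

def fetch_and_verify(url: str, expected: str, dest: str) -> None:
    """Download url to dest, refusing to accept it if the SHA-512 does not match."""
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):  # stream in 1 MiB chunks
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected:
        raise ValueError(f"checksum mismatch for {url}")

if __name__ == "__main__":
    fetch_and_verify(URL, EXPECTED_SHA512, "/tmp/cni-plugins-linux-amd64-v1.1.1.tgz")
```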
Feb 9 19:22:01.457758 kernel: audit: type=1131 audit(1707506521.449:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.447686 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:22:01.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.447946 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:22:01.449858 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:22:01.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.450171 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:22:01.458940 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:22:01.459295 systemd[1]: Stopped ignition-files.service. Feb 9 19:22:01.462182 systemd[1]: Stopping ignition-mount.service... Feb 9 19:22:01.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.472226 iscsid[637]: iscsid shutting down. Feb 9 19:22:01.463867 systemd[1]: Stopping iscsid.service... Feb 9 19:22:01.466765 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:22:01.466971 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:22:01.468389 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:22:01.468829 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:22:01.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.468975 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:22:01.469567 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:22:01.469680 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:22:01.471954 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:22:01.474972 systemd[1]: Stopped iscsid.service. Feb 9 19:22:01.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:22:01.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.479719 systemd[1]: Stopping iscsiuio.service... Feb 9 19:22:01.480909 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:22:01.481400 systemd[1]: Stopped iscsiuio.service. Feb 9 19:22:01.483047 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:22:01.483213 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:22:01.502113 ignition[815]: INFO : Ignition 2.14.0 Feb 9 19:22:01.502113 ignition[815]: INFO : Stage: umount Feb 9 19:22:01.502113 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:22:01.502113 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:22:01.502113 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:22:01.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.511286 ignition[815]: INFO : umount: umount passed Feb 9 19:22:01.511286 ignition[815]: INFO : Ignition finished successfully Feb 9 19:22:01.505836 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:22:01.505940 systemd[1]: Stopped ignition-mount.service. Feb 9 19:22:01.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.506681 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:22:01.506721 systemd[1]: Stopped ignition-disks.service. Feb 9 19:22:01.507226 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:22:01.507262 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:22:01.507716 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:22:01.507750 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:22:01.508217 systemd[1]: Stopped target network.target. Feb 9 19:22:01.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.508597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 9 19:22:01.508634 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:22:01.509132 systemd[1]: Stopped target paths.target. Feb 9 19:22:01.509506 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:22:01.510230 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:22:01.510715 systemd[1]: Stopped target slices.target. Feb 9 19:22:01.511838 systemd[1]: Stopped target sockets.target. Feb 9 19:22:01.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.512822 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:22:01.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.512852 systemd[1]: Closed iscsid.socket. Feb 9 19:22:01.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.513782 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:22:01.513813 systemd[1]: Closed iscsiuio.socket. Feb 9 19:22:01.514315 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:22:01.514352 systemd[1]: Stopped ignition-setup.service. Feb 9 19:22:01.515333 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:22:01.516026 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:22:01.518135 systemd-networkd[632]: eth0: DHCPv6 lease lost Feb 9 19:22:01.535000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:22:01.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.519232 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:22:01.520544 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:22:01.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.520638 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:22:01.523382 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:22:01.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.523714 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:22:01.526209 systemd[1]: Stopping network-cleanup.service... Feb 9 19:22:01.543000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:22:01.528786 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:22:01.528865 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:22:01.529753 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:22:01.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.529810 systemd[1]: Stopped systemd-sysctl.service. 
Feb 9 19:22:01.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.530948 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:22:01.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.530988 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:22:01.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.531988 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:22:01.533721 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:22:01.535188 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:22:01.535297 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:22:01.538233 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:22:01.538414 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:22:01.540433 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:22:01.540553 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:22:01.542363 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:22:01.542405 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:22:01.543020 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:22:01.543047 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:22:01.545040 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:22:01.545161 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:22:01.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.545990 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:22:01.546027 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:22:01.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.546973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:22:01.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:01.547011 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:22:01.547928 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:22:01.547968 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:22:01.549789 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:22:01.560033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:22:01.560144 systemd[1]: Stopped systemd-vconsole-setup.service. 
Feb 9 19:22:01.561485 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:22:01.561587 systemd[1]: Stopped network-cleanup.service. Feb 9 19:22:01.562630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:22:01.562708 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:22:01.563651 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:22:01.565171 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:22:01.584815 systemd[1]: Switching root. Feb 9 19:22:01.607118 systemd-journald[185]: Journal stopped Feb 9 19:22:06.385373 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Feb 9 19:22:06.385443 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:22:06.385463 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:22:06.385475 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:22:06.385487 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:22:06.385498 kernel: SELinux: policy capability open_perms=1 Feb 9 19:22:06.385510 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:22:06.385521 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:22:06.385535 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:22:06.385548 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:22:06.385560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:22:06.385574 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:22:06.385587 systemd[1]: Successfully loaded SELinux policy in 93.347ms. Feb 9 19:22:06.385606 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.423ms. Feb 9 19:22:06.385620 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:22:06.385633 systemd[1]: Detected virtualization kvm. Feb 9 19:22:06.385645 systemd[1]: Detected architecture x86-64. Feb 9 19:22:06.385657 systemd[1]: Detected first boot. Feb 9 19:22:06.385670 systemd[1]: Hostname set to . Feb 9 19:22:06.385686 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:22:06.385698 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:22:06.385710 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:22:06.385723 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:22:06.385736 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:22:06.385750 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:22:06.385765 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:22:06.385778 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:22:06.385790 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:22:06.385803 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Feb 9 19:22:06.385815 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:22:06.385827 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:22:06.385839 systemd[1]: Created slice system-getty.slice. Feb 9 19:22:06.385852 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:22:06.385864 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:22:06.385878 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:22:06.385891 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:22:06.385902 systemd[1]: Created slice user.slice. Feb 9 19:22:06.385914 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:22:06.385926 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:22:06.385939 systemd[1]: Set up automount boot.automount. Feb 9 19:22:06.385952 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:22:06.385966 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:22:06.385979 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:22:06.385994 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:22:06.386006 systemd[1]: Reached target integritysetup.target. Feb 9 19:22:06.386021 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:22:06.386033 systemd[1]: Reached target remote-fs.target. Feb 9 19:22:06.386045 systemd[1]: Reached target slices.target. Feb 9 19:22:06.386072 systemd[1]: Reached target swap.target. Feb 9 19:22:06.386086 systemd[1]: Reached target torcx.target. Feb 9 19:22:06.386102 systemd[1]: Reached target veritysetup.target. Feb 9 19:22:06.386115 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:22:06.386128 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:22:06.386140 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:22:06.386152 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:22:06.386163 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:22:06.386175 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:22:06.386187 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:22:06.386199 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:22:06.386213 systemd[1]: Mounting media.mount... Feb 9 19:22:06.386225 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:22:06.386237 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:22:06.386250 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:22:06.386262 systemd[1]: Mounting tmp.mount... Feb 9 19:22:06.386274 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:22:06.386286 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:22:06.386306 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:22:06.386318 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:22:06.386337 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:22:06.386350 systemd[1]: Starting modprobe@drm.service... Feb 9 19:22:06.386362 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:22:06.386374 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:22:06.386386 systemd[1]: Starting modprobe@loop.service... Feb 9 19:22:06.386399 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:22:06.386411 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:22:06.386423 systemd[1]: Stopped systemd-fsck-root.service. 
Feb 9 19:22:06.386435 kernel: kauditd_printk_skb: 64 callbacks suppressed Feb 9 19:22:06.386449 kernel: audit: type=1131 audit(1707506526.243:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.386461 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:22:06.386474 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:22:06.386487 kernel: audit: type=1131 audit(1707506526.253:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.386499 systemd[1]: Stopped systemd-journald.service. Feb 9 19:22:06.386511 kernel: audit: type=1130 audit(1707506526.266:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.386523 kernel: audit: type=1131 audit(1707506526.266:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.386537 systemd[1]: Starting systemd-journald.service... Feb 9 19:22:06.386548 kernel: audit: type=1334 audit(1707506526.276:109): prog-id=18 op=LOAD Feb 9 19:22:06.386565 kernel: audit: type=1334 audit(1707506526.276:110): prog-id=19 op=LOAD Feb 9 19:22:06.386576 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:22:06.386588 kernel: audit: type=1334 audit(1707506526.276:111): prog-id=20 op=LOAD Feb 9 19:22:06.386599 kernel: audit: type=1334 audit(1707506526.277:112): prog-id=16 op=UNLOAD Feb 9 19:22:06.386610 kernel: audit: type=1334 audit(1707506526.277:113): prog-id=17 op=UNLOAD Feb 9 19:22:06.386621 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:22:06.386634 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:22:06.386648 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:22:06.386660 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:22:06.386678 systemd[1]: Stopped verity-setup.service. Feb 9 19:22:06.386691 kernel: audit: type=1131 audit(1707506526.313:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.386703 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:22:06.386714 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:22:06.386726 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:22:06.386738 systemd[1]: Mounted media.mount. Feb 9 19:22:06.386750 kernel: fuse: init (API version 7.34) Feb 9 19:22:06.386763 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:22:06.386775 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:22:06.386787 kernel: loop: module loaded Feb 9 19:22:06.386798 systemd[1]: Mounted tmp.mount. Feb 9 19:22:06.386810 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:22:06.386822 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:22:06.386840 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:22:06.386852 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 19:22:06.386865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:22:06.386879 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:22:06.386892 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:22:06.386904 systemd[1]: Finished modprobe@drm.service. Feb 9 19:22:06.386916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:22:06.386929 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:22:06.386942 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:22:06.386954 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:22:06.386967 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:22:06.386981 systemd[1]: Finished modprobe@loop.service. Feb 9 19:22:06.386992 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:22:06.387005 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:22:06.387018 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:22:06.387030 systemd[1]: Reached target network-pre.target. Feb 9 19:22:06.387045 systemd-journald[921]: Journal started Feb 9 19:22:06.387120 systemd-journald[921]: Runtime Journal (/run/log/journal/a459ad3f69004c41accdb4fe5f5e6168) is 4.9M, max 39.5M, 34.5M free. Feb 9 19:22:01.988000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:22:02.119000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:22:02.119000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:22:02.119000 audit: BPF prog-id=10 op=LOAD Feb 9 19:22:02.119000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:22:02.119000 audit: BPF prog-id=11 op=LOAD Feb 9 19:22:02.119000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:22:02.290000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:22:02.290000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:22:02.290000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:22:02.292000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:22:02.292000 audit[847]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:22:02.292000 audit: CWD cwd="/" Feb 9 19:22:02.292000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:02.292000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:02.292000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:22:06.099000 audit: BPF prog-id=12 op=LOAD Feb 9 19:22:06.099000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:22:06.099000 audit: BPF prog-id=13 op=LOAD Feb 9 19:22:06.099000 audit: BPF prog-id=14 op=LOAD Feb 9 19:22:06.099000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:22:06.099000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:22:06.101000 audit: BPF prog-id=15 op=LOAD Feb 9 19:22:06.101000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:22:06.101000 audit: BPF prog-id=16 op=LOAD Feb 9 19:22:06.101000 audit: BPF prog-id=17 op=LOAD Feb 9 19:22:06.101000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:22:06.101000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:22:06.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.114000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:22:06.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:22:06.276000 audit: BPF prog-id=18 op=LOAD Feb 9 19:22:06.276000 audit: BPF prog-id=19 op=LOAD Feb 9 19:22:06.276000 audit: BPF prog-id=20 op=LOAD Feb 9 19:22:06.277000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:22:06.277000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:22:06.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:22:06.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.383000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:22:06.383000 audit[921]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc5f0a2400 a2=4000 a3=7ffc5f0a249c items=0 ppid=1 pid=921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:22:06.383000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:22:02.286793 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:22:06.097050 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:22:02.287854 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:22:06.097081 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:22:02.287875 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:22:06.102589 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 19:22:02.287931 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:22:02.287943 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:22:02.287974 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:22:02.287989 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:22:02.288227 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:22:02.288265 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:22:02.288280 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:22:02.289288 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:22:02.289327 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:22:02.289348 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:22:02.289365 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:22:02.289384 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:22:02.289400 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:22:05.564517 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:22:05.565368 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:22:05.565759 /usr/lib/systemd/system-generators/torcx-generator[847]: 
time="2024-02-09T19:22:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:22:05.566801 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:22:05.566963 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:22:05.567227 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-09T19:22:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:22:06.397097 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:22:06.399088 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:22:06.404209 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:22:06.410086 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:22:06.417102 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:22:06.420106 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:22:06.422098 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:22:06.425174 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:22:06.429094 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:22:06.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.435108 systemd[1]: Started systemd-journald.service. Feb 9 19:22:06.435657 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:22:06.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.436297 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:22:06.436847 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:22:06.439742 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:22:06.443139 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:22:06.454872 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:22:06.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.455641 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:22:06.458642 udevadm[956]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Feb 9 19:22:06.459108 systemd-journald[921]: Time spent on flushing to /var/log/journal/a459ad3f69004c41accdb4fe5f5e6168 is 44.810ms for 1140 entries. Feb 9 19:22:06.459108 systemd-journald[921]: System Journal (/var/log/journal/a459ad3f69004c41accdb4fe5f5e6168) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:22:06.582508 systemd-journald[921]: Received client request to flush runtime journal. Feb 9 19:22:06.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.489732 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:22:06.581637 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:22:06.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:06.583636 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:22:07.118635 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:22:07.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:07.120000 audit: BPF prog-id=21 op=LOAD Feb 9 19:22:07.121000 audit: BPF prog-id=22 op=LOAD Feb 9 19:22:07.121000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:22:07.121000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:22:07.122231 systemd[1]: Starting systemd-udevd.service... Feb 9 19:22:07.166919 systemd-udevd[959]: Using default interface naming scheme 'v252'. Feb 9 19:22:07.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:07.234548 systemd[1]: Started systemd-udevd.service. Feb 9 19:22:07.240000 audit: BPF prog-id=23 op=LOAD Feb 9 19:22:07.245370 systemd[1]: Starting systemd-networkd.service... Feb 9 19:22:07.269000 audit: BPF prog-id=24 op=LOAD Feb 9 19:22:07.270000 audit: BPF prog-id=25 op=LOAD Feb 9 19:22:07.270000 audit: BPF prog-id=26 op=LOAD Feb 9 19:22:07.272557 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:22:07.320848 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:22:07.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:07.325969 systemd[1]: Started systemd-userdbd.service. Feb 9 19:22:07.405102 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:22:07.411616 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:22:07.437538 systemd-networkd[969]: lo: Link UP Feb 9 19:22:07.437551 systemd-networkd[969]: lo: Gained carrier Feb 9 19:22:07.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:22:07.439144 systemd-networkd[969]: Enumeration completed Feb 9 19:22:07.439309 systemd[1]: Started systemd-networkd.service. Feb 9 19:22:07.440419 systemd-networkd[969]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:22:07.443120 systemd-networkd[969]: eth0: Link UP Feb 9 19:22:07.443128 systemd-networkd[969]: eth0: Gained carrier Feb 9 19:22:07.446327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:22:07.454231 systemd-networkd[969]: eth0: DHCPv4 address 172.24.4.101/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:22:07.440000 audit[973]: AVC avc: denied { confidentiality } for pid=973 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:22:07.440000 audit[973]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ff2e2f8920 a1=32194 a2=7f3077ec7bc5 a3=5 items=108 ppid=959 pid=973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:22:07.440000 audit: CWD cwd="/" Feb 9 19:22:07.440000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=1 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=2 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=3 name=(null) inode=14456 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=4 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=5 name=(null) inode=14457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=6 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=7 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=8 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=9 name=(null) inode=14459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=10 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=11 name=(null) inode=14460 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=12 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=13 name=(null) inode=14461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=14 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=15 name=(null) inode=14462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=16 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=17 name=(null) inode=14463 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=18 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=19 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=20 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=21 name=(null) inode=14465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=22 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=23 name=(null) inode=14466 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=24 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=25 name=(null) inode=14467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=26 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH 
item=27 name=(null) inode=14468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=28 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=29 name=(null) inode=14469 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=30 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=31 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=32 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=33 name=(null) inode=14471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=34 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=35 name=(null) inode=14472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=36 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=37 name=(null) inode=14473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=38 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=39 name=(null) inode=14474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=40 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=41 name=(null) inode=14475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=42 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=43 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=44 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=45 name=(null) inode=14477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=46 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=47 name=(null) inode=14478 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=48 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=49 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=50 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=51 name=(null) inode=14480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=52 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=53 name=(null) inode=14481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=55 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=56 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=57 name=(null) inode=14483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=58 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=59 name=(null) inode=14484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=60 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=61 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=62 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=63 name=(null) inode=14486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=64 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=65 name=(null) inode=14487 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=66 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=67 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=68 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=69 name=(null) inode=14489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=70 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=71 name=(null) inode=14490 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=72 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=73 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=74 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=75 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=76 name=(null) inode=14491 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=77 name=(null) inode=14493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=78 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=79 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=80 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=81 name=(null) inode=14495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=82 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=83 name=(null) inode=14496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=84 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=85 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=86 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=87 name=(null) inode=14498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=88 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=89 name=(null) inode=14499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=90 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=91 name=(null) inode=14500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=92 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=93 name=(null) inode=14501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=94 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=95 name=(null) inode=14502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=96 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=97 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=98 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=99 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=100 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=101 name=(null) inode=14505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=102 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=103 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=104 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=105 name=(null) inode=14507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=106 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PATH item=107 name=(null) inode=14508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:22:07.440000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:22:07.480125 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:22:07.482091 kernel: piix4_smbus 0000:00:01.3: SMBus Host 
Controller at 0x700, revision 0 Feb 9 19:22:07.492128 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:22:07.538540 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:22:07.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:07.540583 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:22:07.566666 lvm[988]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:22:07.597225 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:22:07.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:07.597812 systemd[1]: Reached target cryptsetup.target. Feb 9 19:22:07.599507 systemd[1]: Starting lvm2-activation.service... Feb 9 19:22:07.604143 lvm[989]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:22:07.627552 systemd[1]: Finished lvm2-activation.service. Feb 9 19:22:07.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:07.628967 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:22:07.630161 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:22:07.630229 systemd[1]: Reached target local-fs.target. Feb 9 19:22:07.631322 systemd[1]: Reached target machines.target. Feb 9 19:22:07.635042 systemd[1]: Starting ldconfig.service... Feb 9 19:22:07.637319 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:22:07.637420 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:22:07.641892 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:22:07.650190 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:22:07.652829 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:22:07.653445 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:22:07.653527 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:22:07.655112 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:22:07.655929 systemd[1]: boot.automount: Got automount request for /boot, triggered by 991 (bootctl) Feb 9 19:22:07.657619 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:22:07.753706 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:22:07.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.074910 systemd-tmpfiles[994]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:22:08.106117 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Feb 9 19:22:08.107829 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:22:08.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.109527 systemd-tmpfiles[994]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:22:08.136235 systemd-tmpfiles[994]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:22:08.272379 systemd-fsck[1000]: fsck.fat 4.2 (2021-01-31) Feb 9 19:22:08.272379 systemd-fsck[1000]: /dev/vda1: 789 files, 115339/258078 clusters Feb 9 19:22:08.283010 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:22:08.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.287855 systemd[1]: Mounting boot.mount... Feb 9 19:22:08.316976 systemd[1]: Mounted boot.mount. Feb 9 19:22:08.339826 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:22:08.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.417970 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:22:08.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.420029 systemd[1]: Starting audit-rules.service... Feb 9 19:22:08.421467 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:22:08.424000 audit: BPF prog-id=27 op=LOAD Feb 9 19:22:08.428000 audit: BPF prog-id=28 op=LOAD Feb 9 19:22:08.422987 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:22:08.425577 systemd[1]: Starting systemd-resolved.service... Feb 9 19:22:08.429591 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:22:08.432735 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:22:08.444854 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:22:08.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.445510 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:22:08.453000 audit[1008]: SYSTEM_BOOT pid=1008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.457153 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:22:08.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.491378 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 9 19:22:08.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:22:08.527853 augenrules[1023]: No rules Feb 9 19:22:08.527000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:22:08.527000 audit[1023]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd31265a10 a2=420 a3=0 items=0 ppid=1003 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:22:08.527000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:22:08.528313 systemd[1]: Finished audit-rules.service. Feb 9 19:22:08.541440 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:22:08.542165 systemd[1]: Reached target time-set.target. Feb 9 19:22:08.558974 systemd-resolved[1006]: Positive Trust Anchors: Feb 9 19:22:08.558994 systemd-resolved[1006]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:22:08.559030 systemd-resolved[1006]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:22:09.189525 systemd-timesyncd[1007]: Contacted time server 217.182.137.208:123 (0.flatcar.pool.ntp.org). Feb 9 19:22:09.189834 systemd-timesyncd[1007]: Initial clock synchronization to Fri 2024-02-09 19:22:09.189415 UTC. Feb 9 19:22:09.193022 systemd-resolved[1006]: Using system hostname 'ci-3510-3-2-c-009518a0f7.novalocal'. Feb 9 19:22:09.194617 systemd[1]: Started systemd-resolved.service. Feb 9 19:22:09.195230 systemd[1]: Reached target network.target. Feb 9 19:22:09.195668 systemd[1]: Reached target nss-lookup.target. Feb 9 19:22:09.419796 ldconfig[990]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:22:09.422424 systemd-networkd[969]: eth0: Gained IPv6LL Feb 9 19:22:09.441755 systemd[1]: Finished ldconfig.service. Feb 9 19:22:09.445868 systemd[1]: Starting systemd-update-done.service... Feb 9 19:22:09.459978 systemd[1]: Finished systemd-update-done.service. Feb 9 19:22:09.461327 systemd[1]: Reached target sysinit.target. Feb 9 19:22:09.462620 systemd[1]: Started motdgen.path. Feb 9 19:22:09.463708 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:22:09.465455 systemd[1]: Started logrotate.timer. Feb 9 19:22:09.466662 systemd[1]: Started mdadm.timer. Feb 9 19:22:09.467687 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:22:09.468802 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:22:09.468910 systemd[1]: Reached target paths.target. Feb 9 19:22:09.469965 systemd[1]: Reached target timers.target. Feb 9 19:22:09.471622 systemd[1]: Listening on dbus.socket. Feb 9 19:22:09.475065 systemd[1]: Starting docker.socket... 
Feb 9 19:22:09.482193 systemd[1]: Listening on sshd.socket. Feb 9 19:22:09.483491 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:22:09.484453 systemd[1]: Listening on docker.socket. Feb 9 19:22:09.485865 systemd[1]: Reached target sockets.target. Feb 9 19:22:09.495827 systemd[1]: Reached target basic.target. Feb 9 19:22:09.497077 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:22:09.497147 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:22:09.499410 systemd[1]: Starting containerd.service... Feb 9 19:22:09.502633 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:22:09.508532 systemd[1]: Starting dbus.service... Feb 9 19:22:09.512627 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:22:09.517093 systemd[1]: Starting extend-filesystems.service... Feb 9 19:22:09.521627 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:22:09.525411 systemd[1]: Starting motdgen.service... Feb 9 19:22:09.531087 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:22:09.534421 systemd[1]: Starting prepare-critools.service... Feb 9 19:22:09.539943 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:22:09.545690 systemd[1]: Starting sshd-keygen.service... Feb 9 19:22:09.558198 jq[1037]: false Feb 9 19:22:09.557322 systemd[1]: Starting systemd-logind.service... Feb 9 19:22:09.558279 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:22:09.558386 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:22:09.559195 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:22:09.561210 systemd[1]: Starting update-engine.service... Feb 9 19:22:09.563977 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:22:09.570633 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:22:09.570834 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:22:09.582593 jq[1054]: true Feb 9 19:22:09.596062 tar[1056]: ./ Feb 9 19:22:09.596062 tar[1056]: ./macvlan Feb 9 19:22:09.599617 extend-filesystems[1038]: Found vda Feb 9 19:22:09.600558 systemd[1]: Created slice system-sshd.slice. Feb 9 19:22:09.603157 extend-filesystems[1038]: Found vda1 Feb 9 19:22:09.604165 tar[1057]: crictl Feb 9 19:22:09.609370 extend-filesystems[1038]: Found vda2 Feb 9 19:22:09.610556 extend-filesystems[1038]: Found vda3 Feb 9 19:22:09.614620 extend-filesystems[1038]: Found usr Feb 9 19:22:09.614726 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:22:09.614912 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 9 19:22:09.616682 extend-filesystems[1038]: Found vda4 Feb 9 19:22:09.618034 extend-filesystems[1038]: Found vda6 Feb 9 19:22:09.618773 extend-filesystems[1038]: Found vda7 Feb 9 19:22:09.619421 extend-filesystems[1038]: Found vda9 Feb 9 19:22:09.620032 extend-filesystems[1038]: Checking size of /dev/vda9 Feb 9 19:22:09.620592 jq[1062]: true Feb 9 19:22:09.651159 dbus-daemon[1034]: [system] SELinux support is enabled Feb 9 19:22:09.651634 systemd[1]: Started dbus.service. Feb 9 19:22:09.654280 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:22:09.654459 systemd[1]: Finished motdgen.service. Feb 9 19:22:09.655037 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:22:09.655064 systemd[1]: Reached target system-config.target. Feb 9 19:22:09.655519 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:22:09.655540 systemd[1]: Reached target user-config.target. Feb 9 19:22:09.665512 extend-filesystems[1038]: Resized partition /dev/vda9 Feb 9 19:22:09.683934 extend-filesystems[1083]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:22:09.729933 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 9 19:22:09.750905 update_engine[1052]: I0209 19:22:09.747246 1052 main.cc:92] Flatcar Update Engine starting Feb 9 19:22:09.791639 update_engine[1052]: I0209 19:22:09.758136 1052 update_check_scheduler.cc:74] Next update check in 6m14s Feb 9 19:22:09.791790 tar[1056]: ./static Feb 9 19:22:09.791856 coreos-metadata[1033]: Feb 09 19:22:09.780 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 9 19:22:09.758006 systemd[1]: Started update-engine.service. Feb 9 19:22:09.760431 systemd[1]: Started locksmithd.service. Feb 9 19:22:09.793479 systemd-logind[1049]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:22:09.796417 env[1061]: time="2024-02-09T19:22:09.795204977Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:22:09.793504 systemd-logind[1049]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:22:09.794833 systemd-logind[1049]: New seat seat0. Feb 9 19:22:09.799944 coreos-metadata[1033]: Feb 09 19:22:09.799 INFO Fetch successful Feb 9 19:22:09.799944 coreos-metadata[1033]: Feb 09 19:22:09.799 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:22:09.802562 systemd[1]: Started systemd-logind.service. Feb 9 19:22:09.815094 coreos-metadata[1033]: Feb 09 19:22:09.815 INFO Fetch successful Feb 9 19:22:09.830008 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 9 19:22:09.835863 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:22:09.939811 extend-filesystems[1083]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:22:09.939811 extend-filesystems[1083]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 9 19:22:09.939811 extend-filesystems[1083]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 9 19:22:09.966074 bash[1090]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.853379303Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.947145701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.952049138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.952156900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.952837997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.952958674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.953066306Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.953134924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.955096754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:22:09.966301 env[1061]: time="2024-02-09T19:22:09.965229478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:22:09.936554 unknown[1033]: wrote ssh authorized keys file for user: core Feb 9 19:22:09.970996 extend-filesystems[1038]: Resized filesystem in /dev/vda9 Feb 9 19:22:09.981944 env[1061]: time="2024-02-09T19:22:09.968022065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:22:09.981944 env[1061]: time="2024-02-09T19:22:09.968076798Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:22:09.981944 env[1061]: time="2024-02-09T19:22:09.968222321Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:22:09.981944 env[1061]: time="2024-02-09T19:22:09.968260222Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:22:09.940002 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:22:09.940190 systemd[1]: Finished extend-filesystems.service. Feb 9 19:22:09.987170 update-ssh-keys[1097]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:22:09.987380 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.993661943Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.993746812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.993820931Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.993984498Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994115483Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994161500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994234627Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994274251Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994309437Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994344403Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994379188Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994414654Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994643504Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:22:09.995939 env[1061]: time="2024-02-09T19:22:09.994829713Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:22:09.996813 env[1061]: time="2024-02-09T19:22:09.995788080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:22:09.996813 env[1061]: time="2024-02-09T19:22:09.995855086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997071307Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997226157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997341584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997377822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997401396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997425311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997449316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997472529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997495893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997523114Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997773354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997808750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997833486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:22:09.999228 env[1061]: time="2024-02-09T19:22:09.997857672Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:22:09.999989 env[1061]: time="2024-02-09T19:22:09.997959763Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:22:09.999989 env[1061]: time="2024-02-09T19:22:09.997988828Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:22:09.999989 env[1061]: time="2024-02-09T19:22:09.998024835Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:22:09.999989 env[1061]: time="2024-02-09T19:22:09.998091410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:22:10.000179 env[1061]: time="2024-02-09T19:22:09.998499125Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:22:10.000179 env[1061]: time="2024-02-09T19:22:09.998620993Z" level=info msg="Connect containerd service" Feb 9 19:22:10.000179 env[1061]: time="2024-02-09T19:22:09.998677439Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:22:10.003286 env[1061]: time="2024-02-09T19:22:10.000958858Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:22:10.004285 env[1061]: time="2024-02-09T19:22:10.004263406Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 19:22:10.012018 env[1061]: time="2024-02-09T19:22:10.011915838Z" level=info msg="Start subscribing containerd event" Feb 9 19:22:10.014068 env[1061]: time="2024-02-09T19:22:10.014047756Z" level=info msg="Start recovering state" Feb 9 19:22:10.014985 env[1061]: time="2024-02-09T19:22:10.014969254Z" level=info msg="Start event monitor" Feb 9 19:22:10.016934 env[1061]: time="2024-02-09T19:22:10.016916306Z" level=info msg="Start snapshots syncer" Feb 9 19:22:10.017043 env[1061]: time="2024-02-09T19:22:10.017027294Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:22:10.017104 env[1061]: time="2024-02-09T19:22:10.017090793Z" level=info msg="Start streaming server" Feb 9 19:22:10.020971 env[1061]: time="2024-02-09T19:22:10.020948479Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:22:10.021209 systemd[1]: Started containerd.service. Feb 9 19:22:10.024045 env[1061]: time="2024-02-09T19:22:10.024003919Z" level=info msg="containerd successfully booted in 0.255554s" Feb 9 19:22:10.028142 tar[1056]: ./vlan Feb 9 19:22:10.106559 tar[1056]: ./portmap Feb 9 19:22:10.178611 tar[1056]: ./host-local Feb 9 19:22:10.234906 tar[1056]: ./vrf Feb 9 19:22:10.295278 tar[1056]: ./bridge Feb 9 19:22:10.360902 tar[1056]: ./tuning Feb 9 19:22:10.425088 tar[1056]: ./firewall Feb 9 19:22:10.493642 tar[1056]: ./host-device Feb 9 19:22:10.534867 tar[1056]: ./sbr Feb 9 19:22:10.589564 tar[1056]: ./loopback Feb 9 19:22:10.657436 tar[1056]: ./dhcp Feb 9 19:22:10.774398 tar[1056]: ./ptp Feb 9 19:22:10.812000 systemd[1]: Finished prepare-critools.service. Feb 9 19:22:10.831087 tar[1056]: ./ipvlan Feb 9 19:22:10.869264 tar[1056]: ./bandwidth Feb 9 19:22:10.918097 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:22:10.922922 locksmithd[1093]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:22:11.866334 sshd_keygen[1065]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:22:11.898868 systemd[1]: Finished sshd-keygen.service. Feb 9 19:22:11.907283 systemd[1]: Starting issuegen.service... Feb 9 19:22:11.911276 systemd[1]: Started sshd@0-172.24.4.101:22-172.24.4.1:59128.service. Feb 9 19:22:11.913972 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:22:11.914173 systemd[1]: Finished issuegen.service. Feb 9 19:22:11.916293 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:22:11.924998 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:22:11.927081 systemd[1]: Started getty@tty1.service. Feb 9 19:22:11.928759 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:22:11.929499 systemd[1]: Reached target getty.target. Feb 9 19:22:11.930146 systemd[1]: Reached target multi-user.target. Feb 9 19:22:11.932033 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:22:11.941341 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:22:11.941518 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:22:11.942249 systemd[1]: Startup finished in 905ms (kernel) + 12.039s (initrd) + 9.550s (userspace) = 22.496s. Feb 9 19:22:13.245701 sshd[1115]: Accepted publickey for core from 172.24.4.1 port 59128 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:22:13.251688 sshd[1115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:22:13.293191 systemd-logind[1049]: New session 1 of user core. 
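Once the "containerd successfully booted" entry appears, the daemon is answering on the two sockets named just above it (/run/containerd/containerd.sock and its ttrpc variant), and the tar entries that follow are the CNI plugin binaries being unpacked. A minimal Go sketch, assuming the github.com/containerd/containerd client module and the "k8s.io" namespace used by the CRI plugin (neither is part of this log), that checks the GRPC socket responds:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the socket the log shows containerd serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin keeps its containers and images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}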
Feb 9 19:22:13.296297 systemd[1]: Created slice user-500.slice. Feb 9 19:22:13.300634 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:22:13.323821 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:22:13.328290 systemd[1]: Starting user@500.service... Feb 9 19:22:13.339516 (systemd)[1124]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:22:13.476276 systemd[1124]: Queued start job for default target default.target. Feb 9 19:22:13.477362 systemd[1124]: Reached target paths.target. Feb 9 19:22:13.477487 systemd[1124]: Reached target sockets.target. Feb 9 19:22:13.477590 systemd[1124]: Reached target timers.target. Feb 9 19:22:13.477694 systemd[1124]: Reached target basic.target. Feb 9 19:22:13.477923 systemd[1]: Started user@500.service. Feb 9 19:22:13.479012 systemd[1]: Started session-1.scope. Feb 9 19:22:13.479578 systemd[1124]: Reached target default.target. Feb 9 19:22:13.479812 systemd[1124]: Startup finished in 125ms. Feb 9 19:22:14.074340 systemd[1]: Started sshd@1-172.24.4.101:22-172.24.4.1:59134.service. Feb 9 19:22:15.771749 sshd[1133]: Accepted publickey for core from 172.24.4.1 port 59134 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:22:15.775427 sshd[1133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:22:15.786188 systemd-logind[1049]: New session 2 of user core. Feb 9 19:22:15.788324 systemd[1]: Started session-2.scope. Feb 9 19:22:16.524192 sshd[1133]: pam_unix(sshd:session): session closed for user core Feb 9 19:22:16.531713 systemd[1]: sshd@1-172.24.4.101:22-172.24.4.1:59134.service: Deactivated successfully. Feb 9 19:22:16.533592 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:22:16.535452 systemd-logind[1049]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:22:16.538737 systemd[1]: Started sshd@2-172.24.4.101:22-172.24.4.1:33876.service. Feb 9 19:22:16.542053 systemd-logind[1049]: Removed session 2. Feb 9 19:22:17.881545 sshd[1139]: Accepted publickey for core from 172.24.4.1 port 33876 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:22:17.885092 sshd[1139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:22:17.895000 systemd-logind[1049]: New session 3 of user core. Feb 9 19:22:17.896088 systemd[1]: Started session-3.scope. Feb 9 19:22:18.724178 sshd[1139]: pam_unix(sshd:session): session closed for user core Feb 9 19:22:18.732515 systemd[1]: Started sshd@3-172.24.4.101:22-172.24.4.1:33882.service. Feb 9 19:22:18.735041 systemd[1]: sshd@2-172.24.4.101:22-172.24.4.1:33876.service: Deactivated successfully. Feb 9 19:22:18.736623 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:22:18.740232 systemd-logind[1049]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:22:18.743322 systemd-logind[1049]: Removed session 3. Feb 9 19:22:19.967671 sshd[1145]: Accepted publickey for core from 172.24.4.1 port 33882 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:22:19.970650 sshd[1145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:22:19.982577 systemd[1]: Started session-4.scope. Feb 9 19:22:19.983498 systemd-logind[1049]: New session 4 of user core. Feb 9 19:22:20.720739 sshd[1145]: pam_unix(sshd:session): session closed for user core Feb 9 19:22:20.730040 systemd[1]: Started sshd@4-172.24.4.101:22-172.24.4.1:33890.service. 
Feb 9 19:22:20.731359 systemd[1]: sshd@3-172.24.4.101:22-172.24.4.1:33882.service: Deactivated successfully. Feb 9 19:22:20.734109 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:22:20.736539 systemd-logind[1049]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:22:20.739247 systemd-logind[1049]: Removed session 4. Feb 9 19:22:22.344547 sshd[1151]: Accepted publickey for core from 172.24.4.1 port 33890 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:22:22.347758 sshd[1151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:22:22.360021 systemd-logind[1049]: New session 5 of user core. Feb 9 19:22:22.363123 systemd[1]: Started session-5.scope. Feb 9 19:22:22.989040 sudo[1155]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:22:22.990356 sudo[1155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:22:23.651491 systemd[1]: Reloading. Feb 9 19:22:23.808168 /usr/lib/systemd/system-generators/torcx-generator[1194]: time="2024-02-09T19:22:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:22:23.808199 /usr/lib/systemd/system-generators/torcx-generator[1194]: time="2024-02-09T19:22:23Z" level=info msg="torcx already run" Feb 9 19:22:23.875932 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:22:23.876153 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:22:23.899143 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:22:23.986472 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:22:24.000781 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:22:24.001347 systemd[1]: Reached target network-online.target. Feb 9 19:22:24.003309 systemd[1]: Started kubelet.service. Feb 9 19:22:24.016663 systemd[1]: Starting coreos-metadata.service... Feb 9 19:22:24.070406 coreos-metadata[1238]: Feb 09 19:22:24.069 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:22:24.103930 kubelet[1232]: E0209 19:22:24.103824 1232 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:22:24.106267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:22:24.106405 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
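The kubelet exit recorded above ("the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set") means the service was started without being pointed at the CRI socket containerd exposes; the later, successful start is the same kubelet after reconfiguration. A minimal Go sketch, assuming the k8s.io/cri-api and google.golang.org/grpc modules (not part of this log), that probes the endpoint such a flag would name:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The value --container-runtime-endpoint would carry on this host.
	const endpoint = "unix:///run/containerd/containerd.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, endpoint,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Ask the runtime for its version over the CRI v1 API.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("CRI Version: %v", err)
	}
	fmt.Printf("runtime %s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}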
Feb 9 19:22:24.432705 coreos-metadata[1238]: Feb 09 19:22:24.432 INFO Fetch successful Feb 9 19:22:24.432705 coreos-metadata[1238]: Feb 09 19:22:24.432 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 9 19:22:24.449217 coreos-metadata[1238]: Feb 09 19:22:24.449 INFO Fetch successful Feb 9 19:22:24.449217 coreos-metadata[1238]: Feb 09 19:22:24.449 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 9 19:22:24.466158 coreos-metadata[1238]: Feb 09 19:22:24.466 INFO Fetch successful Feb 9 19:22:24.466554 coreos-metadata[1238]: Feb 09 19:22:24.466 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 9 19:22:24.483543 coreos-metadata[1238]: Feb 09 19:22:24.483 INFO Fetch successful Feb 9 19:22:24.483543 coreos-metadata[1238]: Feb 09 19:22:24.483 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 9 19:22:24.502086 coreos-metadata[1238]: Feb 09 19:22:24.502 INFO Fetch successful Feb 9 19:22:24.520302 systemd[1]: Finished coreos-metadata.service. Feb 9 19:22:25.243118 systemd[1]: Stopped kubelet.service. Feb 9 19:22:25.267114 systemd[1]: Reloading. Feb 9 19:22:25.386285 /usr/lib/systemd/system-generators/torcx-generator[1299]: time="2024-02-09T19:22:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:22:25.386319 /usr/lib/systemd/system-generators/torcx-generator[1299]: time="2024-02-09T19:22:25Z" level=info msg="torcx already run" Feb 9 19:22:25.458311 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:22:25.458332 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:22:25.481088 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:22:25.585151 systemd[1]: Started kubelet.service. Feb 9 19:22:25.670485 kubelet[1346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:22:25.670485 kubelet[1346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:22:25.670957 kubelet[1346]: I0209 19:22:25.670508 1346 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:22:25.671960 kubelet[1346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:22:25.671960 kubelet[1346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:22:26.170314 kubelet[1346]: I0209 19:22:26.170232 1346 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:22:26.170314 kubelet[1346]: I0209 19:22:26.170260 1346 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:22:26.170763 kubelet[1346]: I0209 19:22:26.170469 1346 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:22:26.175862 kubelet[1346]: I0209 19:22:26.175817 1346 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:22:26.176029 kubelet[1346]: I0209 19:22:26.176004 1346 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:22:26.176105 kubelet[1346]: I0209 19:22:26.176072 1346 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:22:26.176105 kubelet[1346]: I0209 19:22:26.176091 1346 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:22:26.176105 kubelet[1346]: I0209 19:22:26.176103 1346 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:22:26.176433 kubelet[1346]: I0209 19:22:26.176194 1346 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:22:26.177117 kubelet[1346]: I0209 19:22:26.177066 1346 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:22:26.186607 kubelet[1346]: I0209 19:22:26.186568 1346 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:22:26.186607 kubelet[1346]: I0209 19:22:26.186596 1346 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:22:26.186607 kubelet[1346]: I0209 19:22:26.186622 1346 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:22:26.187014 kubelet[1346]: I0209 19:22:26.186638 1346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:22:26.187222 kubelet[1346]: E0209 19:22:26.187188 1346 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:26.187398 kubelet[1346]: E0209 19:22:26.187269 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:22:26.188726 kubelet[1346]: I0209 19:22:26.188696 1346 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:22:26.189051 kubelet[1346]: W0209 19:22:26.189020 1346 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:22:26.189465 kubelet[1346]: I0209 19:22:26.189431 1346 server.go:1186] "Started kubelet" Feb 9 19:22:26.191657 kubelet[1346]: E0209 19:22:26.191582 1346 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:22:26.191918 kubelet[1346]: E0209 19:22:26.191851 1346 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:22:26.193220 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:22:26.193410 kubelet[1346]: I0209 19:22:26.193275 1346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:22:26.199758 kubelet[1346]: I0209 19:22:26.199724 1346 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:22:26.201755 kubelet[1346]: I0209 19:22:26.201719 1346 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:22:26.207700 kubelet[1346]: I0209 19:22:26.205489 1346 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:22:26.207700 kubelet[1346]: I0209 19:22:26.206185 1346 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:22:26.217444 kubelet[1346]: W0209 19:22:26.217274 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:26.227806 kubelet[1346]: E0209 19:22:26.227695 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:26.227806 kubelet[1346]: W0209 19:22:26.217840 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:26.227806 kubelet[1346]: E0209 19:22:26.227730 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:26.227806 kubelet[1346]: W0209 19:22:26.217911 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:22:26.227806 kubelet[1346]: E0209 19:22:26.227743 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the 
cluster scope Feb 9 19:22:26.232869 kubelet[1346]: E0209 19:22:26.232743 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482831461186", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 189406598, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 189406598, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:26.233173 kubelet[1346]: E0209 19:22:26.233068 1346 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.24.4.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:22:26.234998 kubelet[1346]: I0209 19:22:26.234808 1346 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:22:26.234998 kubelet[1346]: I0209 19:22:26.234829 1346 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:22:26.234998 kubelet[1346]: I0209 19:22:26.234844 1346 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:22:26.237738 kubelet[1346]: E0209 19:22:26.237639 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b24828316ad7dc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 191816668, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 191816668, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not 
retry!) Feb 9 19:22:26.240202 kubelet[1346]: I0209 19:22:26.240155 1346 policy_none.go:49] "None policy: Start" Feb 9 19:22:26.240393 kubelet[1346]: E0209 19:22:26.240028 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebd927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:26.241526 kubelet[1346]: I0209 19:22:26.241493 1346 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:22:26.241526 kubelet[1346]: I0209 19:22:26.241517 1346 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:22:26.242189 kubelet[1346]: E0209 19:22:26.242106 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebee2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:22:26.243938 kubelet[1346]: E0209 19:22:26.243831 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebf98f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:26.262926 systemd[1]: Created slice kubepods.slice. Feb 9 19:22:26.277867 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:22:26.286157 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:22:26.295042 kubelet[1346]: I0209 19:22:26.295003 1346 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:22:26.295246 kubelet[1346]: I0209 19:22:26.295233 1346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:22:26.296570 kubelet[1346]: E0209 19:22:26.296464 1346 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.101\" not found" Feb 9 19:22:26.298565 kubelet[1346]: E0209 19:22:26.298495 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482837aeb497", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 296927383, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 296927383, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot 
create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:26.306465 kubelet[1346]: I0209 19:22:26.306428 1346 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.101" Feb 9 19:22:26.308311 kubelet[1346]: E0209 19:22:26.308285 1346 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.101" Feb 9 19:22:26.309288 kubelet[1346]: E0209 19:22:26.309218 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebd927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 306387335, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebd927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:26.310535 kubelet[1346]: E0209 19:22:26.310486 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebee2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 306397254, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebee2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
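The repeated rejections above all share one cause: the kubelet is still talking to the API server as "system:anonymous" (client certificate rotation is bootstrapping in the background), so node registration, lease creation, and event posting are denied by RBAC until credentials are issued. A minimal client-go sketch, assuming a hypothetical kubeconfig path (/etc/kubernetes/kubelet.conf is not taken from this log), that asks the API server whether the current credentials may perform the exact verb/resource pair being denied:

package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the kubelet's real bootstrap kubeconfig is not shown in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build client: %v", err)
	}

	// SelfSubjectAccessReview: may these credentials create Node objects?
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "create",
				Resource: "nodes",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatalf("access review: %v", err)
	}
	fmt.Printf("can create nodes: %v (%s)\n", resp.Status.Allowed, resp.Status.Reason)
}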
Feb 9 19:22:26.311738 kubelet[1346]: E0209 19:22:26.311687 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebf98f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 306400981, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebf98f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:26.422161 kubelet[1346]: I0209 19:22:26.421789 1346 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:22:26.435302 kubelet[1346]: E0209 19:22:26.435260 1346 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.24.4.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:22:26.462134 kubelet[1346]: I0209 19:22:26.462087 1346 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:22:26.462134 kubelet[1346]: I0209 19:22:26.462120 1346 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:22:26.462134 kubelet[1346]: I0209 19:22:26.462149 1346 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:22:26.462526 kubelet[1346]: E0209 19:22:26.462192 1346 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:22:26.465022 kubelet[1346]: W0209 19:22:26.464946 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:26.465338 kubelet[1346]: E0209 19:22:26.465281 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:26.510075 kubelet[1346]: I0209 19:22:26.510024 1346 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.101" Feb 9 19:22:26.512377 kubelet[1346]: E0209 19:22:26.512334 1346 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.101" Feb 9 19:22:26.513133 kubelet[1346]: E0209 19:22:26.512951 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebd927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 509947612, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebd927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:22:26.515247 kubelet[1346]: E0209 19:22:26.515085 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebee2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 509967550, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebee2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:26.600577 kubelet[1346]: E0209 19:22:26.593806 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebf98f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 509974082, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebf98f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:22:26.838991 kubelet[1346]: E0209 19:22:26.838902 1346 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.24.4.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:22:26.914092 kubelet[1346]: I0209 19:22:26.914047 1346 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.101" Feb 9 19:22:26.917127 kubelet[1346]: E0209 19:22:26.917046 1346 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.101" Feb 9 19:22:26.917291 kubelet[1346]: E0209 19:22:26.917009 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebd927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 913958525, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebd927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
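The lease controller messages ("will retry in 200ms", "400ms", "800ms", then "3.2s") show the kubelet backing off while it cannot write its heartbeat Lease in the kube-node-lease namespace. A small client-go sketch, assuming a hypothetical admin kubeconfig (/etc/kubernetes/admin.conf, not taken from this log), that reads the lease the kubelet is trying to maintain once registration finally succeeds:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig; any credentials allowed to read leases would do.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build client: %v", err)
	}

	// The kubelet's heartbeat object is named after the node ("172.24.4.101" in this log)
	// and lives in the kube-node-lease namespace.
	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "172.24.4.101", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("get lease: %v", err)
	}
	fmt.Printf("lease %s/%s last renewed %v\n", lease.Namespace, lease.Name, lease.Spec.RenewTime)
}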
Feb 9 19:22:26.993417 kubelet[1346]: E0209 19:22:26.993226 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebee2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 913985956, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebee2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:27.158133 kubelet[1346]: W0209 19:22:27.157929 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:22:27.158133 kubelet[1346]: E0209 19:22:27.158002 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:22:27.187600 kubelet[1346]: E0209 19:22:27.187536 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:27.191950 kubelet[1346]: E0209 19:22:27.191744 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebf98f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 913992548, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebf98f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:27.229735 kubelet[1346]: W0209 19:22:27.229684 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:27.230078 kubelet[1346]: E0209 19:22:27.230052 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:27.455694 kubelet[1346]: W0209 19:22:27.455477 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:27.456736 kubelet[1346]: E0209 19:22:27.456692 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:27.641910 kubelet[1346]: E0209 19:22:27.641772 1346 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.24.4.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:22:27.719692 kubelet[1346]: I0209 19:22:27.719524 1346 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.101" Feb 9 19:22:27.729806 kubelet[1346]: E0209 19:22:27.729757 1346 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.101" Feb 9 19:22:27.730103 kubelet[1346]: E0209 19:22:27.729819 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebd927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 27, 719369663, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebd927" is 
forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:27.733501 kubelet[1346]: E0209 19:22:27.733371 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebee2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 27, 719454111, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebee2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:27.792769 kubelet[1346]: E0209 19:22:27.792564 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebf98f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 27, 719461555, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebf98f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:22:27.855376 kubelet[1346]: W0209 19:22:27.855268 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:27.855376 kubelet[1346]: E0209 19:22:27.855359 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:28.188267 kubelet[1346]: E0209 19:22:28.188187 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:28.984620 kubelet[1346]: W0209 19:22:28.984525 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:28.984620 kubelet[1346]: E0209 19:22:28.984591 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:29.188528 kubelet[1346]: E0209 19:22:29.188378 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:29.245453 kubelet[1346]: E0209 19:22:29.245191 1346 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.24.4.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:22:29.333645 kubelet[1346]: I0209 19:22:29.333542 1346 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.101" Feb 9 19:22:29.335456 kubelet[1346]: E0209 19:22:29.335258 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebd927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 29, 332524494, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebd927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace 
"default"' (will not retry!) Feb 9 19:22:29.336790 kubelet[1346]: E0209 19:22:29.336747 1346 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.101" Feb 9 19:22:29.338104 kubelet[1346]: E0209 19:22:29.337952 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebee2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 29, 332548349, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebee2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:29.340075 kubelet[1346]: E0209 19:22:29.339918 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebf98f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 29, 332556304, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebf98f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:22:29.548958 kubelet[1346]: W0209 19:22:29.548661 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:29.548958 kubelet[1346]: E0209 19:22:29.548759 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:29.592179 kubelet[1346]: W0209 19:22:29.592137 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:22:29.592534 kubelet[1346]: E0209 19:22:29.592479 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:22:29.936916 kubelet[1346]: W0209 19:22:29.936829 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:29.937245 kubelet[1346]: E0209 19:22:29.937219 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:30.189467 kubelet[1346]: E0209 19:22:30.189087 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:31.191081 kubelet[1346]: E0209 19:22:31.191025 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:32.192576 kubelet[1346]: E0209 19:22:32.192515 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:32.448471 kubelet[1346]: E0209 19:22:32.448123 1346 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.24.4.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:22:32.538479 kubelet[1346]: I0209 19:22:32.538416 1346 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.101" Feb 9 19:22:32.541757 kubelet[1346]: E0209 19:22:32.541688 1346 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.101" Feb 9 19:22:32.542249 kubelet[1346]: E0209 19:22:32.541708 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebd927", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233825575, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 32, 538328369, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebd927" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:32.544906 kubelet[1346]: E0209 19:22:32.544702 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebee2b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233830955, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 32, 538355089, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebee2b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
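The lease failures interleaved above back off with doubling delays (1.6 s, then 3.2 s, then 6.4 s) because the kubelet cannot yet read its heartbeat Lease in the kube-node-lease namespace. A small sketch of inspecting that Lease with client-go, assuming a hypothetical admin kubeconfig; while registration is still failing, the same Forbidden error the kubelet logs would come back here as well.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig; any identity allowed to read kube-node-lease works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// The kubelet heartbeats by renewing a Lease named after the node.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "172.24.4.101", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // Forbidden or NotFound until the node has registered successfully.
	}
	if lease.Spec.HolderIdentity != nil && lease.Spec.RenewTime != nil {
		fmt.Printf("holder=%s lastRenew=%s\n", *lease.Spec.HolderIdentity, lease.Spec.RenewTime.Time)
	}
}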
Feb 9 19:22:32.549016 kubelet[1346]: E0209 19:22:32.548781 1346 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.101.17b2482833ebf98f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.101", UID:"172.24.4.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.101"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 22, 26, 233833871, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 22, 32, 538361862, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.101.17b2482833ebf98f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:22:32.610929 kubelet[1346]: W0209 19:22:32.610817 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:32.611268 kubelet[1346]: E0209 19:22:32.611219 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:22:33.193494 kubelet[1346]: E0209 19:22:33.193432 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:33.438128 kubelet[1346]: W0209 19:22:33.438070 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:33.438529 kubelet[1346]: E0209 19:22:33.438503 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:22:34.194202 kubelet[1346]: E0209 19:22:34.194131 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:35.195052 kubelet[1346]: E0209 19:22:35.194951 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:35.574793 kubelet[1346]: W0209 19:22:35.574582 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:22:35.574793 kubelet[1346]: 
E0209 19:22:35.574656 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:22:35.672008 kubelet[1346]: W0209 19:22:35.671953 1346 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:35.672314 kubelet[1346]: E0209 19:22:35.672289 1346 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:22:36.177474 kubelet[1346]: I0209 19:22:36.177384 1346 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:22:36.195481 kubelet[1346]: E0209 19:22:36.195432 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:36.297909 kubelet[1346]: E0209 19:22:36.297502 1346 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.101\" not found" Feb 9 19:22:36.617250 kubelet[1346]: E0209 19:22:36.617149 1346 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.101" not found Feb 9 19:22:37.196530 kubelet[1346]: E0209 19:22:37.196485 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:37.638992 kubelet[1346]: E0209 19:22:37.638959 1346 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.101" not found Feb 9 19:22:38.198522 kubelet[1346]: E0209 19:22:38.198429 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:38.861757 kubelet[1346]: E0209 19:22:38.861725 1346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.101\" not found" node="172.24.4.101" Feb 9 19:22:38.944660 kubelet[1346]: I0209 19:22:38.944605 1346 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.101" Feb 9 19:22:39.043340 kubelet[1346]: I0209 19:22:39.043303 1346 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.101" Feb 9 19:22:39.073189 kubelet[1346]: E0209 19:22:39.073159 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.140347 sudo[1155]: pam_unix(sudo:session): session closed for user root Feb 9 19:22:39.174290 kubelet[1346]: E0209 19:22:39.174240 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.199428 kubelet[1346]: E0209 19:22:39.199389 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:39.275283 kubelet[1346]: E0209 19:22:39.275213 1346 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.372440 sshd[1151]: pam_unix(sshd:session): session closed for user core Feb 9 19:22:39.377189 kubelet[1346]: E0209 19:22:39.375994 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.377745 systemd[1]: sshd@4-172.24.4.101:22-172.24.4.1:33890.service: Deactivated successfully. Feb 9 19:22:39.379493 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:22:39.381039 systemd-logind[1049]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:22:39.384232 systemd-logind[1049]: Removed session 5. Feb 9 19:22:39.477078 kubelet[1346]: E0209 19:22:39.476226 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.577294 kubelet[1346]: E0209 19:22:39.577248 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.678522 kubelet[1346]: E0209 19:22:39.678477 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.780255 kubelet[1346]: E0209 19:22:39.779724 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.880787 kubelet[1346]: E0209 19:22:39.880716 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:39.981599 kubelet[1346]: E0209 19:22:39.981552 1346 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.101\" not found" Feb 9 19:22:40.083127 kubelet[1346]: I0209 19:22:40.083101 1346 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:22:40.085513 env[1061]: time="2024-02-09T19:22:40.085417422Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:22:40.086179 kubelet[1346]: I0209 19:22:40.085979 1346 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:22:40.196579 kubelet[1346]: I0209 19:22:40.196535 1346 apiserver.go:52] "Watching apiserver" Feb 9 19:22:40.201148 kubelet[1346]: E0209 19:22:40.201053 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:40.242402 kubelet[1346]: I0209 19:22:40.242343 1346 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:22:40.242856 kubelet[1346]: I0209 19:22:40.242813 1346 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:22:40.252070 systemd[1]: Created slice kubepods-burstable-podd8f0b0a1_2dd3_4792_828d_9fd8a54a0c4e.slice. Feb 9 19:22:40.277239 systemd[1]: Created slice kubepods-besteffort-podf4f9af84_a338_4799_9011_c7e4acdfa644.slice. 
Feb 9 19:22:40.308321 kubelet[1346]: I0209 19:22:40.308267 1346 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:22:40.394462 kubelet[1346]: I0209 19:22:40.394303 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-lib-modules\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.395544 kubelet[1346]: I0209 19:22:40.395506 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-clustermesh-secrets\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.395779 kubelet[1346]: I0209 19:22:40.395755 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-config-path\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396248 kubelet[1346]: I0209 19:22:40.396167 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hubble-tls\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396407 kubelet[1346]: I0209 19:22:40.396259 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqc66\" (UniqueName: \"kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-kube-api-access-sqc66\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396407 kubelet[1346]: I0209 19:22:40.396330 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4f9af84-a338-4799-9011-c7e4acdfa644-xtables-lock\") pod \"kube-proxy-d7lkm\" (UID: \"f4f9af84-a338-4799-9011-c7e4acdfa644\") " pod="kube-system/kube-proxy-d7lkm" Feb 9 19:22:40.396407 kubelet[1346]: I0209 19:22:40.396395 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-run\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396639 kubelet[1346]: I0209 19:22:40.396452 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hostproc\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396639 kubelet[1346]: I0209 19:22:40.396504 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f9af84-a338-4799-9011-c7e4acdfa644-lib-modules\") pod \"kube-proxy-d7lkm\" (UID: \"f4f9af84-a338-4799-9011-c7e4acdfa644\") " pod="kube-system/kube-proxy-d7lkm" Feb 9 19:22:40.396639 
kubelet[1346]: I0209 19:22:40.396553 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-bpf-maps\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396639 kubelet[1346]: I0209 19:22:40.396610 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-cgroup\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396927 kubelet[1346]: I0209 19:22:40.396663 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-etc-cni-netd\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396927 kubelet[1346]: I0209 19:22:40.396718 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-net\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396927 kubelet[1346]: I0209 19:22:40.396770 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-kernel\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.396927 kubelet[1346]: I0209 19:22:40.396824 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4f9af84-a338-4799-9011-c7e4acdfa644-kube-proxy\") pod \"kube-proxy-d7lkm\" (UID: \"f4f9af84-a338-4799-9011-c7e4acdfa644\") " pod="kube-system/kube-proxy-d7lkm" Feb 9 19:22:40.396927 kubelet[1346]: I0209 19:22:40.396922 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5jw\" (UniqueName: \"kubernetes.io/projected/f4f9af84-a338-4799-9011-c7e4acdfa644-kube-api-access-zg5jw\") pod \"kube-proxy-d7lkm\" (UID: \"f4f9af84-a338-4799-9011-c7e4acdfa644\") " pod="kube-system/kube-proxy-d7lkm" Feb 9 19:22:40.397270 kubelet[1346]: I0209 19:22:40.397036 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cni-path\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.397270 kubelet[1346]: I0209 19:22:40.397131 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-xtables-lock\") pod \"cilium-zvfsv\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " pod="kube-system/cilium-zvfsv" Feb 9 19:22:40.397270 kubelet[1346]: I0209 19:22:40.397171 1346 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:22:41.202176 kubelet[1346]: E0209 19:22:41.202032 1346 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:41.435409 kubelet[1346]: I0209 19:22:41.435369 1346 request.go:690] Waited for 1.191475352s due to client-side throttling, not priority and fairness, request: GET:https://172.24.4.140:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 9 19:22:42.074041 env[1061]: time="2024-02-09T19:22:42.073848020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvfsv,Uid:d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e,Namespace:kube-system,Attempt:0,}" Feb 9 19:22:42.087704 env[1061]: time="2024-02-09T19:22:42.087086267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7lkm,Uid:f4f9af84-a338-4799-9011-c7e4acdfa644,Namespace:kube-system,Attempt:0,}" Feb 9 19:22:42.202762 kubelet[1346]: E0209 19:22:42.202638 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:42.843600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778224648.mount: Deactivated successfully. Feb 9 19:22:42.864428 env[1061]: time="2024-02-09T19:22:42.864272929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.869822 env[1061]: time="2024-02-09T19:22:42.869706779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.873809 env[1061]: time="2024-02-09T19:22:42.873684785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.883663 env[1061]: time="2024-02-09T19:22:42.883579930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.891367 env[1061]: time="2024-02-09T19:22:42.891307445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.895571 env[1061]: time="2024-02-09T19:22:42.895520088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.897434 env[1061]: time="2024-02-09T19:22:42.897324252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.898828 env[1061]: time="2024-02-09T19:22:42.898772140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:42.935356 env[1061]: time="2024-02-09T19:22:42.934138278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:22:42.935356 env[1061]: time="2024-02-09T19:22:42.934184153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:22:42.935356 env[1061]: time="2024-02-09T19:22:42.934199362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:22:42.935815 env[1061]: time="2024-02-09T19:22:42.935732030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76b01cfee10e35d967cd57cf01a1c5dea92689bf34c2b6327000b12209ae6270 pid=1437 runtime=io.containerd.runc.v2 Feb 9 19:22:42.948026 env[1061]: time="2024-02-09T19:22:42.947847384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:22:42.948348 env[1061]: time="2024-02-09T19:22:42.947991653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:22:42.948589 env[1061]: time="2024-02-09T19:22:42.948490412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:22:42.952171 env[1061]: time="2024-02-09T19:22:42.950509165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157 pid=1447 runtime=io.containerd.runc.v2 Feb 9 19:22:42.977833 systemd[1]: Started cri-containerd-76b01cfee10e35d967cd57cf01a1c5dea92689bf34c2b6327000b12209ae6270.scope. Feb 9 19:22:42.999034 systemd[1]: Started cri-containerd-bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157.scope. Feb 9 19:22:43.034212 env[1061]: time="2024-02-09T19:22:43.034129629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7lkm,Uid:f4f9af84-a338-4799-9011-c7e4acdfa644,Namespace:kube-system,Attempt:0,} returns sandbox id \"76b01cfee10e35d967cd57cf01a1c5dea92689bf34c2b6327000b12209ae6270\"" Feb 9 19:22:43.038712 env[1061]: time="2024-02-09T19:22:43.038671008Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:22:43.049713 env[1061]: time="2024-02-09T19:22:43.049671552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvfsv,Uid:d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\"" Feb 9 19:22:43.206002 kubelet[1346]: E0209 19:22:43.203824 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:43.921969 systemd[1]: run-containerd-runc-k8s.io-76b01cfee10e35d967cd57cf01a1c5dea92689bf34c2b6327000b12209ae6270-runc.4Kcwv1.mount: Deactivated successfully. Feb 9 19:22:44.205226 kubelet[1346]: E0209 19:22:44.205022 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:44.526822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249069416.mount: Deactivated successfully. 
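The RunPodSandbox messages above are containerd acting on CRI calls from the kubelet and handing back sandbox IDs (76b01cfe… for kube-proxy-d7lkm, bdb72ea7… for cilium-zvfsv). Below is a hedged sketch of querying the same CRI endpoint directly with the cri-api Go client; the socket path is the assumed containerd default rather than something stated in the log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		// For this boot the list would include the cilium-zvfsv and kube-proxy-d7lkm sandboxes.
		fmt.Printf("%s %s/%s state=%s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}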
Feb 9 19:22:45.205340 kubelet[1346]: E0209 19:22:45.205231 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:45.734917 env[1061]: time="2024-02-09T19:22:45.734833398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:45.737680 env[1061]: time="2024-02-09T19:22:45.737621621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:45.739933 env[1061]: time="2024-02-09T19:22:45.739781371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:45.748528 env[1061]: time="2024-02-09T19:22:45.747669240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:45.752273 env[1061]: time="2024-02-09T19:22:45.750827445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:22:45.755255 env[1061]: time="2024-02-09T19:22:45.755168085Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:22:45.757959 env[1061]: time="2024-02-09T19:22:45.757831505Z" level=info msg="CreateContainer within sandbox \"76b01cfee10e35d967cd57cf01a1c5dea92689bf34c2b6327000b12209ae6270\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:22:45.776695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702303887.mount: Deactivated successfully. Feb 9 19:22:45.786683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470609073.mount: Deactivated successfully. Feb 9 19:22:45.817978 env[1061]: time="2024-02-09T19:22:45.817836422Z" level=info msg="CreateContainer within sandbox \"76b01cfee10e35d967cd57cf01a1c5dea92689bf34c2b6327000b12209ae6270\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9608697413f1f3671fe4f6db54db44c2641acd62fb4e0d2c233bcba2aaa385e8\"" Feb 9 19:22:45.820474 env[1061]: time="2024-02-09T19:22:45.820351727Z" level=info msg="StartContainer for \"9608697413f1f3671fe4f6db54db44c2641acd62fb4e0d2c233bcba2aaa385e8\"" Feb 9 19:22:45.871179 systemd[1]: Started cri-containerd-9608697413f1f3671fe4f6db54db44c2641acd62fb4e0d2c233bcba2aaa385e8.scope. 
Feb 9 19:22:45.924198 env[1061]: time="2024-02-09T19:22:45.924109432Z" level=info msg="StartContainer for \"9608697413f1f3671fe4f6db54db44c2641acd62fb4e0d2c233bcba2aaa385e8\" returns successfully" Feb 9 19:22:46.187564 kubelet[1346]: E0209 19:22:46.187458 1346 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:46.206755 kubelet[1346]: E0209 19:22:46.206706 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:46.558698 kubelet[1346]: I0209 19:22:46.558468 1346 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-d7lkm" podStartSLOduration=-9.223372029296415e+09 pod.CreationTimestamp="2024-02-09 19:22:39 +0000 UTC" firstStartedPulling="2024-02-09 19:22:43.036244393 +0000 UTC m=+17.443545391" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:22:46.55383627 +0000 UTC m=+20.961137308" watchObservedRunningTime="2024-02-09 19:22:46.558360465 +0000 UTC m=+20.965661533" Feb 9 19:22:47.207908 kubelet[1346]: E0209 19:22:47.207732 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:48.208213 kubelet[1346]: E0209 19:22:48.208140 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:49.208772 kubelet[1346]: E0209 19:22:49.208697 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:50.209333 kubelet[1346]: E0209 19:22:50.209274 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:51.210939 kubelet[1346]: E0209 19:22:51.210187 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:52.211199 kubelet[1346]: E0209 19:22:52.211103 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:53.211925 kubelet[1346]: E0209 19:22:53.211836 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:53.480826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190085218.mount: Deactivated successfully. Feb 9 19:22:54.214442 kubelet[1346]: E0209 19:22:54.214320 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:55.215356 kubelet[1346]: E0209 19:22:55.215269 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:55.455466 update_engine[1052]: I0209 19:22:55.455009 1052 update_attempter.cc:509] Updating boot flags... 
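The podStartSLOduration of roughly -9.22e+09 seconds logged above for kube-proxy-d7lkm is not a real latency: lastFinishedPulling is the zero time (0001-01-01), so subtracting the 2024 pull-start time saturates at Go's minimum Duration, and the following int64 subtraction wraps around, leaving the minimum Duration plus the real start latency of about 7.5 s. A small sketch reproducing that arithmetic with the timestamps from the log, under the assumption that the tracker effectively computes (observedRunningTime - creation) - (lastFinishedPulling - firstStartedPulling):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the pod_startup_latency_tracker line above (parse errors ignored in this sketch).
	created, _ := time.Parse(time.RFC3339, "2024-02-09T19:22:39Z")
	firstStartedPulling, _ := time.Parse(time.RFC3339Nano, "2024-02-09T19:22:43.036244393Z")
	observedRunning, _ := time.Parse(time.RFC3339Nano, "2024-02-09T19:22:46.55383627Z")
	var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC, exactly as logged

	// time.Time.Sub saturates at the minimum Duration when the true gap (about -2023 years)
	// does not fit in an int64 of nanoseconds.
	imagePulling := lastFinishedPulling.Sub(firstStartedPulling)
	podStarting := observedRunning.Sub(created) // about 7.55 s

	// Plain Duration subtraction is two's-complement int64 arithmetic and wraps,
	// giving roughly minDuration + 7.55 s, i.e. about -9.2233720e+09 seconds.
	slo := podStarting - imagePulling
	fmt.Printf("imagePulling=%v\npodStartSLOduration=%.6e s\n", imagePulling, slo.Seconds())
}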
Feb 9 19:22:56.216214 kubelet[1346]: E0209 19:22:56.216163 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:57.217348 kubelet[1346]: E0209 19:22:57.217288 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:57.745708 env[1061]: time="2024-02-09T19:22:57.745614563Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:57.749019 env[1061]: time="2024-02-09T19:22:57.748957730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:57.753613 env[1061]: time="2024-02-09T19:22:57.753551205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:57.757833 env[1061]: time="2024-02-09T19:22:57.755824889Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:22:57.764225 env[1061]: time="2024-02-09T19:22:57.764183960Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:22:57.785784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786850271.mount: Deactivated successfully. Feb 9 19:22:57.796843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000153799.mount: Deactivated successfully. Feb 9 19:22:57.816404 env[1061]: time="2024-02-09T19:22:57.816306291Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\"" Feb 9 19:22:57.817413 env[1061]: time="2024-02-09T19:22:57.817353049Z" level=info msg="StartContainer for \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\"" Feb 9 19:22:57.845375 systemd[1]: Started cri-containerd-c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6.scope. Feb 9 19:22:57.889917 env[1061]: time="2024-02-09T19:22:57.889848482Z" level=info msg="StartContainer for \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\" returns successfully" Feb 9 19:22:57.891350 systemd[1]: cri-containerd-c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6.scope: Deactivated successfully. 
Feb 9 19:22:58.217497 kubelet[1346]: E0209 19:22:58.217429 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:58.581847 env[1061]: time="2024-02-09T19:22:58.581757466Z" level=info msg="shim disconnected" id=c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6 Feb 9 19:22:58.582143 env[1061]: time="2024-02-09T19:22:58.581859487Z" level=warning msg="cleaning up after shim disconnected" id=c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6 namespace=k8s.io Feb 9 19:22:58.582143 env[1061]: time="2024-02-09T19:22:58.581916183Z" level=info msg="cleaning up dead shim" Feb 9 19:22:58.598086 env[1061]: time="2024-02-09T19:22:58.597989683Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:22:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1724 runtime=io.containerd.runc.v2\n" Feb 9 19:22:58.781162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6-rootfs.mount: Deactivated successfully. Feb 9 19:22:59.218032 kubelet[1346]: E0209 19:22:59.217934 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:22:59.572121 env[1061]: time="2024-02-09T19:22:59.571935240Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:22:59.603871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580149028.mount: Deactivated successfully. Feb 9 19:22:59.620866 env[1061]: time="2024-02-09T19:22:59.620726673Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\"" Feb 9 19:22:59.621715 env[1061]: time="2024-02-09T19:22:59.621651934Z" level=info msg="StartContainer for \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\"" Feb 9 19:22:59.668815 systemd[1]: Started cri-containerd-104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513.scope. Feb 9 19:22:59.714147 env[1061]: time="2024-02-09T19:22:59.714077187Z" level=info msg="StartContainer for \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\" returns successfully" Feb 9 19:22:59.734309 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:22:59.735750 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:22:59.736354 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:22:59.739811 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:22:59.742473 systemd[1]: cri-containerd-104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513.scope: Deactivated successfully. Feb 9 19:22:59.749694 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:22:59.778057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513-rootfs.mount: Deactivated successfully. 
Feb 9 19:22:59.778750 env[1061]: time="2024-02-09T19:22:59.778671007Z" level=info msg="shim disconnected" id=104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513 Feb 9 19:22:59.778750 env[1061]: time="2024-02-09T19:22:59.778722855Z" level=warning msg="cleaning up after shim disconnected" id=104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513 namespace=k8s.io Feb 9 19:22:59.778750 env[1061]: time="2024-02-09T19:22:59.778733254Z" level=info msg="cleaning up dead shim" Feb 9 19:22:59.786401 env[1061]: time="2024-02-09T19:22:59.786367553Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:22:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1788 runtime=io.containerd.runc.v2\n" Feb 9 19:23:00.218961 kubelet[1346]: E0209 19:23:00.218822 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:00.576687 env[1061]: time="2024-02-09T19:23:00.576182844Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:23:00.611308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289108466.mount: Deactivated successfully. Feb 9 19:23:00.637678 env[1061]: time="2024-02-09T19:23:00.637558525Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\"" Feb 9 19:23:00.638696 env[1061]: time="2024-02-09T19:23:00.638614130Z" level=info msg="StartContainer for \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\"" Feb 9 19:23:00.685058 systemd[1]: Started cri-containerd-f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50.scope. Feb 9 19:23:00.721444 systemd[1]: cri-containerd-f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50.scope: Deactivated successfully. Feb 9 19:23:00.736125 env[1061]: time="2024-02-09T19:23:00.736088967Z" level=info msg="StartContainer for \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\" returns successfully" Feb 9 19:23:00.765545 env[1061]: time="2024-02-09T19:23:00.765447225Z" level=info msg="shim disconnected" id=f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50 Feb 9 19:23:00.765731 env[1061]: time="2024-02-09T19:23:00.765562179Z" level=warning msg="cleaning up after shim disconnected" id=f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50 namespace=k8s.io Feb 9 19:23:00.765731 env[1061]: time="2024-02-09T19:23:00.765587397Z" level=info msg="cleaning up dead shim" Feb 9 19:23:00.779686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50-rootfs.mount: Deactivated successfully. 
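mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs above all follow the same run-to-completion lifecycle: the container scope starts, the workload exits, the runc v2 shim disconnects, and the rootfs mount is cleaned up. A hedged sketch, assuming the default containerd socket and the k8s.io namespace seen in the log, that uses the native containerd Go client to list containers and show which ones still have a live task:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket; the namespace matches the k8s.io namespace in the log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// Run-to-completion init containers (mount-cgroup, mount-bpf-fs, ...) have no live
			// task once their shim has exited and containerd has cleaned up the rootfs mount.
			fmt.Printf("%s: no running task (%v)\n", c.ID(), err)
			continue
		}
		status, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s\n", c.ID(), status.Status)
	}
}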
Feb 9 19:23:00.785240 env[1061]: time="2024-02-09T19:23:00.785128938Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:23:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1849 runtime=io.containerd.runc.v2\n" Feb 9 19:23:01.219941 kubelet[1346]: E0209 19:23:01.219862 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:01.585308 env[1061]: time="2024-02-09T19:23:01.585240006Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:23:01.628158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800643136.mount: Deactivated successfully. Feb 9 19:23:01.641294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751315198.mount: Deactivated successfully. Feb 9 19:23:01.649745 env[1061]: time="2024-02-09T19:23:01.649673036Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\"" Feb 9 19:23:01.650925 env[1061]: time="2024-02-09T19:23:01.650836313Z" level=info msg="StartContainer for \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\"" Feb 9 19:23:01.688260 systemd[1]: Started cri-containerd-952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f.scope. Feb 9 19:23:01.737374 systemd[1]: cri-containerd-952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f.scope: Deactivated successfully. Feb 9 19:23:01.739427 env[1061]: time="2024-02-09T19:23:01.739320160Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8f0b0a1_2dd3_4792_828d_9fd8a54a0c4e.slice/cri-containerd-952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f.scope/memory.events\": no such file or directory" Feb 9 19:23:01.748616 env[1061]: time="2024-02-09T19:23:01.748583483Z" level=info msg="StartContainer for \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\" returns successfully" Feb 9 19:23:01.781284 env[1061]: time="2024-02-09T19:23:01.781225344Z" level=info msg="shim disconnected" id=952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f Feb 9 19:23:01.781284 env[1061]: time="2024-02-09T19:23:01.781284765Z" level=warning msg="cleaning up after shim disconnected" id=952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f namespace=k8s.io Feb 9 19:23:01.781521 env[1061]: time="2024-02-09T19:23:01.781296457Z" level=info msg="cleaning up dead shim" Feb 9 19:23:01.789398 env[1061]: time="2024-02-09T19:23:01.789355936Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:23:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1906 runtime=io.containerd.runc.v2\n" Feb 9 19:23:02.221910 kubelet[1346]: E0209 19:23:02.221810 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:02.596734 env[1061]: time="2024-02-09T19:23:02.596535903Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:23:02.642743 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1484083685.mount: Deactivated successfully. Feb 9 19:23:02.657031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2364895868.mount: Deactivated successfully. Feb 9 19:23:02.663951 env[1061]: time="2024-02-09T19:23:02.663906196Z" level=info msg="CreateContainer within sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\"" Feb 9 19:23:02.665130 env[1061]: time="2024-02-09T19:23:02.665107685Z" level=info msg="StartContainer for \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\"" Feb 9 19:23:02.693729 systemd[1]: Started cri-containerd-c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71.scope. Feb 9 19:23:02.736760 env[1061]: time="2024-02-09T19:23:02.736638978Z" level=info msg="StartContainer for \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\" returns successfully" Feb 9 19:23:02.827077 kubelet[1346]: I0209 19:23:02.827044 1346 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:23:03.222840 kubelet[1346]: E0209 19:23:03.222523 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:03.236935 kernel: Initializing XFRM netlink socket Feb 9 19:23:03.637273 kubelet[1346]: I0209 19:23:03.637218 1346 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zvfsv" podStartSLOduration=-9.223372012217657e+09 pod.CreationTimestamp="2024-02-09 19:22:39 +0000 UTC" firstStartedPulling="2024-02-09 19:22:43.051255656 +0000 UTC m=+17.458556645" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:23:03.635133402 +0000 UTC m=+38.042434430" watchObservedRunningTime="2024-02-09 19:23:03.637118388 +0000 UTC m=+38.044419427" Feb 9 19:23:04.223654 kubelet[1346]: E0209 19:23:04.223560 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:04.570987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:23:04.572775 systemd-networkd[969]: cilium_host: Link UP Feb 9 19:23:04.574630 systemd-networkd[969]: cilium_net: Link UP Feb 9 19:23:04.574640 systemd-networkd[969]: cilium_net: Gained carrier Feb 9 19:23:04.574842 systemd-networkd[969]: cilium_host: Gained carrier Feb 9 19:23:04.698206 systemd-networkd[969]: cilium_vxlan: Link UP Feb 9 19:23:04.698218 systemd-networkd[969]: cilium_vxlan: Gained carrier Feb 9 19:23:04.726707 systemd-networkd[969]: cilium_net: Gained IPv6LL Feb 9 19:23:05.046045 kernel: NET: Registered PF_ALG protocol family Feb 9 19:23:05.225656 kubelet[1346]: E0209 19:23:05.225480 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:05.486361 systemd-networkd[969]: cilium_host: Gained IPv6LL Feb 9 19:23:05.806283 systemd-networkd[969]: cilium_vxlan: Gained IPv6LL Feb 9 19:23:05.963283 systemd-networkd[969]: lxc_health: Link UP Feb 9 19:23:05.992143 systemd-networkd[969]: lxc_health: Gained carrier Feb 9 19:23:05.992968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:23:06.187834 kubelet[1346]: E0209 19:23:06.187227 1346 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
19:23:06.225714 kubelet[1346]: E0209 19:23:06.225676 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:07.224316 systemd-networkd[969]: lxc_health: Gained IPv6LL Feb 9 19:23:07.227051 kubelet[1346]: E0209 19:23:07.227022 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:07.798308 kubelet[1346]: I0209 19:23:07.798226 1346 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:23:07.812460 systemd[1]: Created slice kubepods-besteffort-pod708d5f47_5158_4066_a1a3_479811d99584.slice. Feb 9 19:23:07.902123 kubelet[1346]: I0209 19:23:07.902067 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2shv\" (UniqueName: \"kubernetes.io/projected/708d5f47-5158-4066-a1a3-479811d99584-kube-api-access-w2shv\") pod \"nginx-deployment-8ffc5cf85-n29qq\" (UID: \"708d5f47-5158-4066-a1a3-479811d99584\") " pod="default/nginx-deployment-8ffc5cf85-n29qq" Feb 9 19:23:08.071644 kubelet[1346]: I0209 19:23:08.071482 1346 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:23:08.120490 env[1061]: time="2024-02-09T19:23:08.120380002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-n29qq,Uid:708d5f47-5158-4066-a1a3-479811d99584,Namespace:default,Attempt:0,}" Feb 9 19:23:08.222526 systemd-networkd[969]: lxcb655094f1f4a: Link UP Feb 9 19:23:08.227606 kubelet[1346]: E0209 19:23:08.227550 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:08.230725 kernel: eth0: renamed from tmp1ad3c Feb 9 19:23:08.236410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:23:08.236468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb655094f1f4a: link becomes ready Feb 9 19:23:08.237812 systemd-networkd[969]: lxcb655094f1f4a: Gained carrier Feb 9 19:23:09.228749 kubelet[1346]: E0209 19:23:09.228518 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:09.903298 systemd-networkd[969]: lxcb655094f1f4a: Gained IPv6LL Feb 9 19:23:10.231266 kubelet[1346]: E0209 19:23:10.230855 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:11.231414 kubelet[1346]: E0209 19:23:11.231331 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:11.667947 env[1061]: time="2024-02-09T19:23:11.667840384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:23:11.668530 env[1061]: time="2024-02-09T19:23:11.667901177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:23:11.668530 env[1061]: time="2024-02-09T19:23:11.667921164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:23:11.668530 env[1061]: time="2024-02-09T19:23:11.668042191Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ad3ce80506561bd8d65bd25f2e2f8316f787dbb525172acc1900e4bbf6bf35c pid=2429 runtime=io.containerd.runc.v2 Feb 9 19:23:11.690024 systemd[1]: run-containerd-runc-k8s.io-1ad3ce80506561bd8d65bd25f2e2f8316f787dbb525172acc1900e4bbf6bf35c-runc.U37lQJ.mount: Deactivated successfully. Feb 9 19:23:11.697110 systemd[1]: Started cri-containerd-1ad3ce80506561bd8d65bd25f2e2f8316f787dbb525172acc1900e4bbf6bf35c.scope. Feb 9 19:23:11.745978 env[1061]: time="2024-02-09T19:23:11.745931146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-n29qq,Uid:708d5f47-5158-4066-a1a3-479811d99584,Namespace:default,Attempt:0,} returns sandbox id \"1ad3ce80506561bd8d65bd25f2e2f8316f787dbb525172acc1900e4bbf6bf35c\"" Feb 9 19:23:11.748237 env[1061]: time="2024-02-09T19:23:11.748212511Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:23:12.233643 kubelet[1346]: E0209 19:23:12.233566 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:13.234826 kubelet[1346]: E0209 19:23:13.234742 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:14.235176 kubelet[1346]: E0209 19:23:14.235054 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:15.236036 kubelet[1346]: E0209 19:23:15.235962 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:16.237213 kubelet[1346]: E0209 19:23:16.237131 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:16.332958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456911045.mount: Deactivated successfully. 
Feb 9 19:23:17.238228 kubelet[1346]: E0209 19:23:17.238160 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:17.767748 env[1061]: time="2024-02-09T19:23:17.767692608Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:17.771544 env[1061]: time="2024-02-09T19:23:17.771505925Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:17.775822 env[1061]: time="2024-02-09T19:23:17.775750802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:17.781319 env[1061]: time="2024-02-09T19:23:17.781249007Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:17.784956 env[1061]: time="2024-02-09T19:23:17.783455803Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:23:17.788325 env[1061]: time="2024-02-09T19:23:17.788249638Z" level=info msg="CreateContainer within sandbox \"1ad3ce80506561bd8d65bd25f2e2f8316f787dbb525172acc1900e4bbf6bf35c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:23:17.814108 env[1061]: time="2024-02-09T19:23:17.814009521Z" level=info msg="CreateContainer within sandbox \"1ad3ce80506561bd8d65bd25f2e2f8316f787dbb525172acc1900e4bbf6bf35c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b009f8944d9b65b25ab0d1e3843e193927c0fd0a04124ba11f3259bd15a6b2ac\"" Feb 9 19:23:17.815316 env[1061]: time="2024-02-09T19:23:17.815252260Z" level=info msg="StartContainer for \"b009f8944d9b65b25ab0d1e3843e193927c0fd0a04124ba11f3259bd15a6b2ac\"" Feb 9 19:23:17.848392 systemd[1]: run-containerd-runc-k8s.io-b009f8944d9b65b25ab0d1e3843e193927c0fd0a04124ba11f3259bd15a6b2ac-runc.eJjwZa.mount: Deactivated successfully. Feb 9 19:23:17.857074 systemd[1]: Started cri-containerd-b009f8944d9b65b25ab0d1e3843e193927c0fd0a04124ba11f3259bd15a6b2ac.scope. 
Feb 9 19:23:17.894114 env[1061]: time="2024-02-09T19:23:17.894070840Z" level=info msg="StartContainer for \"b009f8944d9b65b25ab0d1e3843e193927c0fd0a04124ba11f3259bd15a6b2ac\" returns successfully" Feb 9 19:23:18.239486 kubelet[1346]: E0209 19:23:18.239360 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:18.732669 kubelet[1346]: I0209 19:23:18.732620 1346 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-n29qq" podStartSLOduration=-9.22337202512231e+09 pod.CreationTimestamp="2024-02-09 19:23:07 +0000 UTC" firstStartedPulling="2024-02-09 19:23:11.747528229 +0000 UTC m=+46.154829217" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:23:18.731173797 +0000 UTC m=+53.138474835" watchObservedRunningTime="2024-02-09 19:23:18.732464586 +0000 UTC m=+53.139765624" Feb 9 19:23:19.240398 kubelet[1346]: E0209 19:23:19.240322 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:20.241552 kubelet[1346]: E0209 19:23:20.241480 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:21.242949 kubelet[1346]: E0209 19:23:21.242825 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:22.244206 kubelet[1346]: E0209 19:23:22.244133 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:23.245490 kubelet[1346]: E0209 19:23:23.245405 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:24.246465 kubelet[1346]: E0209 19:23:24.246310 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:25.247708 kubelet[1346]: E0209 19:23:25.247618 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:26.187376 kubelet[1346]: E0209 19:23:26.187303 1346 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:26.249256 kubelet[1346]: E0209 19:23:26.249143 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:27.250527 kubelet[1346]: E0209 19:23:27.250437 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:28.251141 kubelet[1346]: E0209 19:23:28.251020 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:29.251584 kubelet[1346]: E0209 19:23:29.251369 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:30.252705 kubelet[1346]: E0209 19:23:30.252523 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:31.254441 kubelet[1346]: E0209 19:23:31.254335 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:31.832721 kubelet[1346]: I0209 19:23:31.832657 1346 topology_manager.go:210] 
"Topology Admit Handler" Feb 9 19:23:31.849771 systemd[1]: Created slice kubepods-besteffort-podec745151_f224_4ab1_8633_d2db29505a42.slice. Feb 9 19:23:31.982858 kubelet[1346]: I0209 19:23:31.982702 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nkmm\" (UniqueName: \"kubernetes.io/projected/ec745151-f224-4ab1-8633-d2db29505a42-kube-api-access-4nkmm\") pod \"nfs-server-provisioner-0\" (UID: \"ec745151-f224-4ab1-8633-d2db29505a42\") " pod="default/nfs-server-provisioner-0" Feb 9 19:23:31.982858 kubelet[1346]: I0209 19:23:31.982784 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ec745151-f224-4ab1-8633-d2db29505a42-data\") pod \"nfs-server-provisioner-0\" (UID: \"ec745151-f224-4ab1-8633-d2db29505a42\") " pod="default/nfs-server-provisioner-0" Feb 9 19:23:32.165804 env[1061]: time="2024-02-09T19:23:32.164337355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ec745151-f224-4ab1-8633-d2db29505a42,Namespace:default,Attempt:0,}" Feb 9 19:23:32.255219 kubelet[1346]: E0209 19:23:32.255103 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:32.269148 systemd-networkd[969]: lxca8f2f7578289: Link UP Feb 9 19:23:32.278993 kernel: eth0: renamed from tmpf0e7c Feb 9 19:23:32.295005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:23:32.295146 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca8f2f7578289: link becomes ready Feb 9 19:23:32.295436 systemd-networkd[969]: lxca8f2f7578289: Gained carrier Feb 9 19:23:32.598783 env[1061]: time="2024-02-09T19:23:32.598505341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:23:32.598783 env[1061]: time="2024-02-09T19:23:32.598571976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:23:32.598783 env[1061]: time="2024-02-09T19:23:32.598587876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:23:32.604948 env[1061]: time="2024-02-09T19:23:32.599012642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e7cc892c38d39f883b0fd5b23a38e72e037c21b6531b56eccd07b09c59177e pid=2605 runtime=io.containerd.runc.v2 Feb 9 19:23:32.636164 systemd[1]: Started cri-containerd-f0e7cc892c38d39f883b0fd5b23a38e72e037c21b6531b56eccd07b09c59177e.scope. 
Feb 9 19:23:32.729799 env[1061]: time="2024-02-09T19:23:32.729571266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ec745151-f224-4ab1-8633-d2db29505a42,Namespace:default,Attempt:0,} returns sandbox id \"f0e7cc892c38d39f883b0fd5b23a38e72e037c21b6531b56eccd07b09c59177e\"" Feb 9 19:23:32.732415 env[1061]: time="2024-02-09T19:23:32.732384821Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:23:33.255606 kubelet[1346]: E0209 19:23:33.255533 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:33.582608 systemd-networkd[969]: lxca8f2f7578289: Gained IPv6LL Feb 9 19:23:34.255889 kubelet[1346]: E0209 19:23:34.255820 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:35.256458 kubelet[1346]: E0209 19:23:35.256416 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:36.258041 kubelet[1346]: E0209 19:23:36.257929 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:37.053190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329795057.mount: Deactivated successfully. Feb 9 19:23:37.258933 kubelet[1346]: E0209 19:23:37.258821 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:38.259868 kubelet[1346]: E0209 19:23:38.259807 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:39.261062 kubelet[1346]: E0209 19:23:39.260996 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:40.261555 kubelet[1346]: E0209 19:23:40.261474 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:40.626068 env[1061]: time="2024-02-09T19:23:40.625929855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:40.727248 env[1061]: time="2024-02-09T19:23:40.727174692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:40.739213 env[1061]: time="2024-02-09T19:23:40.739123039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:40.754501 env[1061]: time="2024-02-09T19:23:40.754420114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:40.756864 env[1061]: time="2024-02-09T19:23:40.756740836Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:23:40.762124 env[1061]: time="2024-02-09T19:23:40.762056645Z" 
level=info msg="CreateContainer within sandbox \"f0e7cc892c38d39f883b0fd5b23a38e72e037c21b6531b56eccd07b09c59177e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:23:40.806162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158048668.mount: Deactivated successfully. Feb 9 19:23:40.829087 env[1061]: time="2024-02-09T19:23:40.828966816Z" level=info msg="CreateContainer within sandbox \"f0e7cc892c38d39f883b0fd5b23a38e72e037c21b6531b56eccd07b09c59177e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"846fbf3bd9d84cb83d414cc1ae96bc4e9cb8bbefef8ad959dbfe384fa81477bf\"" Feb 9 19:23:40.831078 env[1061]: time="2024-02-09T19:23:40.830999869Z" level=info msg="StartContainer for \"846fbf3bd9d84cb83d414cc1ae96bc4e9cb8bbefef8ad959dbfe384fa81477bf\"" Feb 9 19:23:40.883660 systemd[1]: Started cri-containerd-846fbf3bd9d84cb83d414cc1ae96bc4e9cb8bbefef8ad959dbfe384fa81477bf.scope. Feb 9 19:23:40.927930 env[1061]: time="2024-02-09T19:23:40.927861517Z" level=info msg="StartContainer for \"846fbf3bd9d84cb83d414cc1ae96bc4e9cb8bbefef8ad959dbfe384fa81477bf\" returns successfully" Feb 9 19:23:41.262516 kubelet[1346]: E0209 19:23:41.262355 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:41.815097 kubelet[1346]: I0209 19:23:41.814932 1346 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.22337202603997e+09 pod.CreationTimestamp="2024-02-09 19:23:31 +0000 UTC" firstStartedPulling="2024-02-09 19:23:32.731481438 +0000 UTC m=+67.138782426" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:23:41.813217808 +0000 UTC m=+76.220518836" watchObservedRunningTime="2024-02-09 19:23:41.814805334 +0000 UTC m=+76.222106372" Feb 9 19:23:42.264111 kubelet[1346]: E0209 19:23:42.263990 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:43.265291 kubelet[1346]: E0209 19:23:43.265224 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:44.266087 kubelet[1346]: E0209 19:23:44.266024 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:45.267537 kubelet[1346]: E0209 19:23:45.267464 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:46.187283 kubelet[1346]: E0209 19:23:46.187187 1346 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:46.268716 kubelet[1346]: E0209 19:23:46.268641 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:47.268864 kubelet[1346]: E0209 19:23:47.268798 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:48.270189 kubelet[1346]: E0209 19:23:48.270082 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:49.271412 kubelet[1346]: E0209 19:23:49.271100 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:50.272130 kubelet[1346]: E0209 
19:23:50.271996 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:51.194604 kubelet[1346]: I0209 19:23:51.194568 1346 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:23:51.203746 systemd[1]: Created slice kubepods-besteffort-pod47137a67_4eda_40df_842c_0a571bd48d18.slice. Feb 9 19:23:51.272667 kubelet[1346]: E0209 19:23:51.272619 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:51.325018 kubelet[1346]: I0209 19:23:51.324956 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8dc71ddb-a6f9-4e1a-8cae-31b54f20fb28\" (UniqueName: \"kubernetes.io/nfs/47137a67-4eda-40df-842c-0a571bd48d18-pvc-8dc71ddb-a6f9-4e1a-8cae-31b54f20fb28\") pod \"test-pod-1\" (UID: \"47137a67-4eda-40df-842c-0a571bd48d18\") " pod="default/test-pod-1" Feb 9 19:23:51.325342 kubelet[1346]: I0209 19:23:51.325149 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntkw\" (UniqueName: \"kubernetes.io/projected/47137a67-4eda-40df-842c-0a571bd48d18-kube-api-access-7ntkw\") pod \"test-pod-1\" (UID: \"47137a67-4eda-40df-842c-0a571bd48d18\") " pod="default/test-pod-1" Feb 9 19:23:51.525060 kernel: FS-Cache: Loaded Feb 9 19:23:51.599424 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:23:51.599601 kernel: RPC: Registered udp transport module. Feb 9 19:23:51.599647 kernel: RPC: Registered tcp transport module. Feb 9 19:23:51.600304 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 19:23:51.665949 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:23:51.900426 kernel: NFS: Registering the id_resolver key type Feb 9 19:23:51.900715 kernel: Key type id_resolver registered Feb 9 19:23:51.900766 kernel: Key type id_legacy registered Feb 9 19:23:51.963926 nfsidmap[2812]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 9 19:23:51.974620 nfsidmap[2813]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Feb 9 19:23:52.110613 env[1061]: time="2024-02-09T19:23:52.109750115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:47137a67-4eda-40df-842c-0a571bd48d18,Namespace:default,Attempt:0,}" Feb 9 19:23:52.197818 systemd-networkd[969]: lxcff93bf9999d8: Link UP Feb 9 19:23:52.209066 kernel: eth0: renamed from tmpbab95 Feb 9 19:23:52.222062 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:23:52.222171 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcff93bf9999d8: link becomes ready Feb 9 19:23:52.223143 systemd-networkd[969]: lxcff93bf9999d8: Gained carrier Feb 9 19:23:52.274353 kubelet[1346]: E0209 19:23:52.274280 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:52.548423 env[1061]: time="2024-02-09T19:23:52.548286204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:23:52.548607 env[1061]: time="2024-02-09T19:23:52.548581641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:23:52.548735 env[1061]: time="2024-02-09T19:23:52.548711475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:23:52.549649 env[1061]: time="2024-02-09T19:23:52.549567125Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bab9584049000b31cd5d4b3c0712a0997678d28ea17b2db65e25cb40e1bdffaf pid=2837 runtime=io.containerd.runc.v2 Feb 9 19:23:52.579384 systemd[1]: run-containerd-runc-k8s.io-bab9584049000b31cd5d4b3c0712a0997678d28ea17b2db65e25cb40e1bdffaf-runc.7mQYUZ.mount: Deactivated successfully. Feb 9 19:23:52.581400 systemd[1]: Started cri-containerd-bab9584049000b31cd5d4b3c0712a0997678d28ea17b2db65e25cb40e1bdffaf.scope. Feb 9 19:23:52.653324 env[1061]: time="2024-02-09T19:23:52.653271728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:47137a67-4eda-40df-842c-0a571bd48d18,Namespace:default,Attempt:0,} returns sandbox id \"bab9584049000b31cd5d4b3c0712a0997678d28ea17b2db65e25cb40e1bdffaf\"" Feb 9 19:23:52.655271 env[1061]: time="2024-02-09T19:23:52.655211907Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:23:53.090781 env[1061]: time="2024-02-09T19:23:53.090613468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:53.093941 env[1061]: time="2024-02-09T19:23:53.093808148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:53.098397 env[1061]: time="2024-02-09T19:23:53.098309525Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:53.102800 env[1061]: time="2024-02-09T19:23:53.102747984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:23:53.104787 env[1061]: time="2024-02-09T19:23:53.104668878Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:23:53.111094 env[1061]: time="2024-02-09T19:23:53.111032279Z" level=info msg="CreateContainer within sandbox \"bab9584049000b31cd5d4b3c0712a0997678d28ea17b2db65e25cb40e1bdffaf\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:23:53.144138 env[1061]: time="2024-02-09T19:23:53.144032504Z" level=info msg="CreateContainer within sandbox \"bab9584049000b31cd5d4b3c0712a0997678d28ea17b2db65e25cb40e1bdffaf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"472182f7c84fcbf909337b96bf82d2f80b202c558a989e3d42c1b463a6a3884f\"" Feb 9 19:23:53.145817 env[1061]: time="2024-02-09T19:23:53.145760385Z" level=info msg="StartContainer for \"472182f7c84fcbf909337b96bf82d2f80b202c558a989e3d42c1b463a6a3884f\"" Feb 9 19:23:53.184269 systemd[1]: Started cri-containerd-472182f7c84fcbf909337b96bf82d2f80b202c558a989e3d42c1b463a6a3884f.scope. 
Feb 9 19:23:53.253072 env[1061]: time="2024-02-09T19:23:53.252998779Z" level=info msg="StartContainer for \"472182f7c84fcbf909337b96bf82d2f80b202c558a989e3d42c1b463a6a3884f\" returns successfully" Feb 9 19:23:53.275207 kubelet[1346]: E0209 19:23:53.275132 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:53.362174 systemd-networkd[969]: lxcff93bf9999d8: Gained IPv6LL Feb 9 19:23:53.850576 kubelet[1346]: I0209 19:23:53.850498 1346 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372017004353e+09 pod.CreationTimestamp="2024-02-09 19:23:34 +0000 UTC" firstStartedPulling="2024-02-09 19:23:52.654868201 +0000 UTC m=+87.062169189" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:23:53.849930267 +0000 UTC m=+88.257231305" watchObservedRunningTime="2024-02-09 19:23:53.850422292 +0000 UTC m=+88.257723360" Feb 9 19:23:54.276100 kubelet[1346]: E0209 19:23:54.275870 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:55.277202 kubelet[1346]: E0209 19:23:55.277131 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:56.278578 kubelet[1346]: E0209 19:23:56.278506 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:57.279542 kubelet[1346]: E0209 19:23:57.279387 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:58.280766 kubelet[1346]: E0209 19:23:58.280629 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:23:59.281358 kubelet[1346]: E0209 19:23:59.281283 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:00.283019 kubelet[1346]: E0209 19:24:00.282865 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:01.284421 kubelet[1346]: E0209 19:24:01.284368 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:02.285610 kubelet[1346]: E0209 19:24:02.285533 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:03.160363 env[1061]: time="2024-02-09T19:24:03.160231369Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:24:03.177512 env[1061]: time="2024-02-09T19:24:03.177321374Z" level=info msg="StopContainer for \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\" with timeout 1 (s)" Feb 9 19:24:03.178195 env[1061]: time="2024-02-09T19:24:03.178124975Z" level=info msg="Stop container \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\" with signal terminated" Feb 9 19:24:03.194602 systemd-networkd[969]: lxc_health: Link DOWN Feb 9 19:24:03.194639 systemd-networkd[969]: lxc_health: Lost carrier Feb 9 19:24:03.248961 systemd[1]: 
cri-containerd-c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71.scope: Deactivated successfully. Feb 9 19:24:03.249568 systemd[1]: cri-containerd-c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71.scope: Consumed 9.191s CPU time. Feb 9 19:24:03.286157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71-rootfs.mount: Deactivated successfully. Feb 9 19:24:03.288371 kubelet[1346]: E0209 19:24:03.286950 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:03.325166 env[1061]: time="2024-02-09T19:24:03.325064797Z" level=info msg="shim disconnected" id=c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71 Feb 9 19:24:03.325652 env[1061]: time="2024-02-09T19:24:03.325603700Z" level=warning msg="cleaning up after shim disconnected" id=c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71 namespace=k8s.io Feb 9 19:24:03.325824 env[1061]: time="2024-02-09T19:24:03.325788858Z" level=info msg="cleaning up dead shim" Feb 9 19:24:03.344969 env[1061]: time="2024-02-09T19:24:03.344781279Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2965 runtime=io.containerd.runc.v2\n" Feb 9 19:24:03.355555 env[1061]: time="2024-02-09T19:24:03.355440457Z" level=info msg="StopContainer for \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\" returns successfully" Feb 9 19:24:03.357115 env[1061]: time="2024-02-09T19:24:03.357039573Z" level=info msg="StopPodSandbox for \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\"" Feb 9 19:24:03.357453 env[1061]: time="2024-02-09T19:24:03.357372678Z" level=info msg="Container to stop \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:24:03.357702 env[1061]: time="2024-02-09T19:24:03.357637386Z" level=info msg="Container to stop \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:24:03.357955 env[1061]: time="2024-02-09T19:24:03.357846619Z" level=info msg="Container to stop \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:24:03.358151 env[1061]: time="2024-02-09T19:24:03.358103322Z" level=info msg="Container to stop \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:24:03.358323 env[1061]: time="2024-02-09T19:24:03.358281457Z" level=info msg="Container to stop \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:24:03.362531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157-shm.mount: Deactivated successfully. Feb 9 19:24:03.378202 systemd[1]: cri-containerd-bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157.scope: Deactivated successfully. Feb 9 19:24:03.429414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157-rootfs.mount: Deactivated successfully. 
Feb 9 19:24:03.444620 env[1061]: time="2024-02-09T19:24:03.444554420Z" level=info msg="shim disconnected" id=bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157 Feb 9 19:24:03.445080 env[1061]: time="2024-02-09T19:24:03.445055933Z" level=warning msg="cleaning up after shim disconnected" id=bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157 namespace=k8s.io Feb 9 19:24:03.445760 env[1061]: time="2024-02-09T19:24:03.445631836Z" level=info msg="cleaning up dead shim" Feb 9 19:24:03.464449 env[1061]: time="2024-02-09T19:24:03.464342027Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2996 runtime=io.containerd.runc.v2\n" Feb 9 19:24:03.465313 env[1061]: time="2024-02-09T19:24:03.465255885Z" level=info msg="TearDown network for sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" successfully" Feb 9 19:24:03.465408 env[1061]: time="2024-02-09T19:24:03.465320185Z" level=info msg="StopPodSandbox for \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" returns successfully" Feb 9 19:24:03.632449 kubelet[1346]: I0209 19:24:03.632376 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-clustermesh-secrets\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.633415 kubelet[1346]: I0209 19:24:03.633341 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqc66\" (UniqueName: \"kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-kube-api-access-sqc66\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.633801 kubelet[1346]: I0209 19:24:03.633752 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cni-path\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.633977 kubelet[1346]: I0209 19:24:03.633870 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-lib-modules\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634120 kubelet[1346]: I0209 19:24:03.634039 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hostproc\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634225 kubelet[1346]: I0209 19:24:03.634150 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-bpf-maps\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634355 kubelet[1346]: I0209 19:24:03.634251 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-kernel\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: 
\"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634355 kubelet[1346]: I0209 19:24:03.634345 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-xtables-lock\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634554 kubelet[1346]: I0209 19:24:03.634444 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hubble-tls\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634554 kubelet[1346]: I0209 19:24:03.634536 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-run\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634722 kubelet[1346]: I0209 19:24:03.634628 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-etc-cni-netd\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634808 kubelet[1346]: I0209 19:24:03.634689 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-cgroup\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.634808 kubelet[1346]: I0209 19:24:03.634783 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-net\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.635032 kubelet[1346]: I0209 19:24:03.634942 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-config-path\") pod \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\" (UID: \"d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e\") " Feb 9 19:24:03.636414 kubelet[1346]: W0209 19:24:03.635602 1346 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:24:03.636773 kubelet[1346]: I0209 19:24:03.636683 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cni-path" (OuterVolumeSpecName: "cni-path") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.636988 kubelet[1346]: I0209 19:24:03.636804 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.636988 kubelet[1346]: I0209 19:24:03.636850 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hostproc" (OuterVolumeSpecName: "hostproc") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.637172 kubelet[1346]: I0209 19:24:03.636998 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.637172 kubelet[1346]: I0209 19:24:03.637044 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.637172 kubelet[1346]: I0209 19:24:03.637090 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.639261 kubelet[1346]: I0209 19:24:03.639156 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.639590 kubelet[1346]: I0209 19:24:03.639277 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.639590 kubelet[1346]: I0209 19:24:03.639323 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.639590 kubelet[1346]: I0209 19:24:03.639368 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.648439 systemd[1]: var-lib-kubelet-pods-d8f0b0a1\x2d2dd3\x2d4792\x2d828d\x2d9fd8a54a0c4e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:24:03.653229 kubelet[1346]: I0209 19:24:03.653144 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:24:03.654178 kubelet[1346]: I0209 19:24:03.654116 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-kube-api-access-sqc66" (OuterVolumeSpecName: "kube-api-access-sqc66") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "kube-api-access-sqc66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:03.655189 kubelet[1346]: I0209 19:24:03.655112 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:24:03.657813 kubelet[1346]: I0209 19:24:03.657724 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" (UID: "d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:03.736065 kubelet[1346]: I0209 19:24:03.735715 1346 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hostproc\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.736065 kubelet[1346]: I0209 19:24:03.735791 1346 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-bpf-maps\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.736065 kubelet[1346]: I0209 19:24:03.735825 1346 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-kernel\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.736065 kubelet[1346]: I0209 19:24:03.735856 1346 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-xtables-lock\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.736065 kubelet[1346]: I0209 19:24:03.735923 1346 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-run\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.736065 kubelet[1346]: I0209 19:24:03.735954 1346 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-etc-cni-netd\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.736065 kubelet[1346]: I0209 19:24:03.735998 1346 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-cgroup\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.737009 kubelet[1346]: I0209 19:24:03.736027 1346 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-host-proc-sys-net\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.737432 kubelet[1346]: I0209 19:24:03.737395 1346 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cilium-config-path\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.737733 kubelet[1346]: I0209 19:24:03.737705 1346 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-hubble-tls\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.738070 kubelet[1346]: I0209 19:24:03.738037 1346 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-sqc66\" (UniqueName: \"kubernetes.io/projected/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-kube-api-access-sqc66\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.738408 kubelet[1346]: I0209 19:24:03.738379 1346 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-cni-path\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.738707 kubelet[1346]: I0209 19:24:03.738680 1346 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-lib-modules\") on node 
\"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.739062 kubelet[1346]: I0209 19:24:03.739007 1346 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e-clustermesh-secrets\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:03.877539 kubelet[1346]: I0209 19:24:03.877462 1346 scope.go:115] "RemoveContainer" containerID="c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71" Feb 9 19:24:03.886332 env[1061]: time="2024-02-09T19:24:03.886229257Z" level=info msg="RemoveContainer for \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\"" Feb 9 19:24:03.894694 systemd[1]: Removed slice kubepods-burstable-podd8f0b0a1_2dd3_4792_828d_9fd8a54a0c4e.slice. Feb 9 19:24:03.894986 systemd[1]: kubepods-burstable-podd8f0b0a1_2dd3_4792_828d_9fd8a54a0c4e.slice: Consumed 9.325s CPU time. Feb 9 19:24:03.908473 env[1061]: time="2024-02-09T19:24:03.908279457Z" level=info msg="RemoveContainer for \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\" returns successfully" Feb 9 19:24:03.909471 kubelet[1346]: I0209 19:24:03.909426 1346 scope.go:115] "RemoveContainer" containerID="952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f" Feb 9 19:24:03.913231 env[1061]: time="2024-02-09T19:24:03.913113683Z" level=info msg="RemoveContainer for \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\"" Feb 9 19:24:03.919553 env[1061]: time="2024-02-09T19:24:03.919470573Z" level=info msg="RemoveContainer for \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\" returns successfully" Feb 9 19:24:03.920063 kubelet[1346]: I0209 19:24:03.920029 1346 scope.go:115] "RemoveContainer" containerID="f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50" Feb 9 19:24:03.923167 env[1061]: time="2024-02-09T19:24:03.923103251Z" level=info msg="RemoveContainer for \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\"" Feb 9 19:24:03.928723 env[1061]: time="2024-02-09T19:24:03.928660067Z" level=info msg="RemoveContainer for \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\" returns successfully" Feb 9 19:24:03.929396 kubelet[1346]: I0209 19:24:03.929367 1346 scope.go:115] "RemoveContainer" containerID="104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513" Feb 9 19:24:03.932571 env[1061]: time="2024-02-09T19:24:03.932505115Z" level=info msg="RemoveContainer for \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\"" Feb 9 19:24:03.937873 env[1061]: time="2024-02-09T19:24:03.937814565Z" level=info msg="RemoveContainer for \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\" returns successfully" Feb 9 19:24:03.938544 kubelet[1346]: I0209 19:24:03.938473 1346 scope.go:115] "RemoveContainer" containerID="c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6" Feb 9 19:24:03.941720 env[1061]: time="2024-02-09T19:24:03.941667738Z" level=info msg="RemoveContainer for \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\"" Feb 9 19:24:03.947426 env[1061]: time="2024-02-09T19:24:03.947344550Z" level=info msg="RemoveContainer for \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\" returns successfully" Feb 9 19:24:03.947799 kubelet[1346]: I0209 19:24:03.947717 1346 scope.go:115] "RemoveContainer" containerID="c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71" Feb 9 19:24:03.948319 env[1061]: time="2024-02-09T19:24:03.948157928Z" 
level=error msg="ContainerStatus for \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\": not found" Feb 9 19:24:03.948564 kubelet[1346]: E0209 19:24:03.948511 1346 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\": not found" containerID="c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71" Feb 9 19:24:03.948666 kubelet[1346]: I0209 19:24:03.948599 1346 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71} err="failed to get container status \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0544fadc7d3f51d4e66733c30d7491197bbfd77fbe75c4ca2514cd84b0d8b71\": not found" Feb 9 19:24:03.948666 kubelet[1346]: I0209 19:24:03.948633 1346 scope.go:115] "RemoveContainer" containerID="952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f" Feb 9 19:24:03.949551 env[1061]: time="2024-02-09T19:24:03.949363645Z" level=error msg="ContainerStatus for \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\": not found" Feb 9 19:24:03.949801 kubelet[1346]: E0209 19:24:03.949776 1346 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\": not found" containerID="952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f" Feb 9 19:24:03.950005 kubelet[1346]: I0209 19:24:03.949857 1346 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f} err="failed to get container status \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\": rpc error: code = NotFound desc = an error occurred when try to find container \"952ecdfb5b703cc71b9734072a2fe15511ee2df9bb2c63d0a05fdcee45faf96f\": not found" Feb 9 19:24:03.950005 kubelet[1346]: I0209 19:24:03.949941 1346 scope.go:115] "RemoveContainer" containerID="f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50" Feb 9 19:24:03.950987 env[1061]: time="2024-02-09T19:24:03.950746304Z" level=error msg="ContainerStatus for \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\": not found" Feb 9 19:24:03.951447 kubelet[1346]: E0209 19:24:03.951387 1346 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\": not found" containerID="f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50" Feb 9 19:24:03.951577 kubelet[1346]: I0209 19:24:03.951461 1346 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={Type:containerd ID:f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50} err="failed to get container status \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9ee01dcc56876cd57824194e7245856aad9e4dd11f446d1fd2de8da5f6b2f50\": not found" Feb 9 19:24:03.951577 kubelet[1346]: I0209 19:24:03.951487 1346 scope.go:115] "RemoveContainer" containerID="104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513" Feb 9 19:24:03.952316 env[1061]: time="2024-02-09T19:24:03.952160572Z" level=error msg="ContainerStatus for \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\": not found" Feb 9 19:24:03.952597 kubelet[1346]: E0209 19:24:03.952519 1346 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\": not found" containerID="104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513" Feb 9 19:24:03.952597 kubelet[1346]: I0209 19:24:03.952573 1346 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513} err="failed to get container status \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\": rpc error: code = NotFound desc = an error occurred when try to find container \"104a05419ce4c6376a740e963ec0419fa7b2cf548c9ebc1ff3374989209f5513\": not found" Feb 9 19:24:03.952597 kubelet[1346]: I0209 19:24:03.952596 1346 scope.go:115] "RemoveContainer" containerID="c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6" Feb 9 19:24:03.953708 env[1061]: time="2024-02-09T19:24:03.953526179Z" level=error msg="ContainerStatus for \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\": not found" Feb 9 19:24:03.954318 kubelet[1346]: E0209 19:24:03.954246 1346 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\": not found" containerID="c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6" Feb 9 19:24:03.954318 kubelet[1346]: I0209 19:24:03.954320 1346 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6} err="failed to get container status \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4fd5411eff9acd28233d840081dd38459fbbd3e0c453158d7204a4e77d821d6\": not found" Feb 9 19:24:04.124294 systemd[1]: var-lib-kubelet-pods-d8f0b0a1\x2d2dd3\x2d4792\x2d828d\x2d9fd8a54a0c4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsqc66.mount: Deactivated successfully. 
Feb 9 19:24:04.124524 systemd[1]: var-lib-kubelet-pods-d8f0b0a1\x2d2dd3\x2d4792\x2d828d\x2d9fd8a54a0c4e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:24:04.287713 kubelet[1346]: E0209 19:24:04.287665 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:04.470707 kubelet[1346]: I0209 19:24:04.470111 1346 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e path="/var/lib/kubelet/pods/d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e/volumes" Feb 9 19:24:05.288857 kubelet[1346]: E0209 19:24:05.288803 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:06.186828 kubelet[1346]: E0209 19:24:06.186785 1346 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:06.290077 kubelet[1346]: E0209 19:24:06.289994 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:06.327311 kubelet[1346]: E0209 19:24:06.327247 1346 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:24:07.290945 kubelet[1346]: E0209 19:24:07.290817 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:07.682632 kubelet[1346]: I0209 19:24:07.682527 1346 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:24:07.683098 kubelet[1346]: E0209 19:24:07.683070 1346 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" containerName="mount-cgroup" Feb 9 19:24:07.683313 kubelet[1346]: E0209 19:24:07.683289 1346 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" containerName="mount-bpf-fs" Feb 9 19:24:07.683521 kubelet[1346]: E0209 19:24:07.683499 1346 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" containerName="cilium-agent" Feb 9 19:24:07.683812 kubelet[1346]: E0209 19:24:07.683789 1346 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" containerName="clean-cilium-state" Feb 9 19:24:07.684049 kubelet[1346]: E0209 19:24:07.684026 1346 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" containerName="apply-sysctl-overwrites" Feb 9 19:24:07.684286 kubelet[1346]: I0209 19:24:07.684243 1346 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8f0b0a1-2dd3-4792-828d-9fd8a54a0c4e" containerName="cilium-agent" Feb 9 19:24:07.692120 kubelet[1346]: I0209 19:24:07.692088 1346 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:24:07.697480 systemd[1]: Created slice kubepods-burstable-pod1aa5ab7e_268c_4050_aeec_1b6c1ede04b7.slice. Feb 9 19:24:07.717430 systemd[1]: Created slice kubepods-besteffort-podad7fef62_7303_461b_8c4a_e94da7777f22.slice. 
Feb 9 19:24:07.769628 kubelet[1346]: I0209 19:24:07.769582 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cni-path\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.770015 kubelet[1346]: I0209 19:24:07.769989 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-etc-cni-netd\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.770238 kubelet[1346]: I0209 19:24:07.770215 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-xtables-lock\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.770452 kubelet[1346]: I0209 19:24:07.770427 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-ipsec-secrets\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.770691 kubelet[1346]: I0209 19:24:07.770666 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-cgroup\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.770951 kubelet[1346]: I0209 19:24:07.770923 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-clustermesh-secrets\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.771170 kubelet[1346]: I0209 19:24:07.771146 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-config-path\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.771420 kubelet[1346]: I0209 19:24:07.771385 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-net\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.771657 kubelet[1346]: I0209 19:24:07.771633 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hubble-tls\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.771934 kubelet[1346]: I0209 19:24:07.771909 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-bpf-maps\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.772155 kubelet[1346]: I0209 19:24:07.772127 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hostproc\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.772359 kubelet[1346]: I0209 19:24:07.772338 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-lib-modules\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.772562 kubelet[1346]: I0209 19:24:07.772541 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-kernel\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.772775 kubelet[1346]: I0209 19:24:07.772750 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-run\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.773055 kubelet[1346]: I0209 19:24:07.773029 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvtwp\" (UniqueName: \"kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-kube-api-access-wvtwp\") pod \"cilium-5qbzl\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " pod="kube-system/cilium-5qbzl" Feb 9 19:24:07.874764 kubelet[1346]: I0209 19:24:07.874704 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad7fef62-7303-461b-8c4a-e94da7777f22-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-nxr8s\" (UID: \"ad7fef62-7303-461b-8c4a-e94da7777f22\") " pod="kube-system/cilium-operator-f59cbd8c6-nxr8s" Feb 9 19:24:07.875206 kubelet[1346]: I0209 19:24:07.875177 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxb2d\" (UniqueName: \"kubernetes.io/projected/ad7fef62-7303-461b-8c4a-e94da7777f22-kube-api-access-pxb2d\") pod \"cilium-operator-f59cbd8c6-nxr8s\" (UID: \"ad7fef62-7303-461b-8c4a-e94da7777f22\") " pod="kube-system/cilium-operator-f59cbd8c6-nxr8s" Feb 9 19:24:08.019091 env[1061]: time="2024-02-09T19:24:08.015566196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5qbzl,Uid:1aa5ab7e-268c-4050-aeec-1b6c1ede04b7,Namespace:kube-system,Attempt:0,}" Feb 9 19:24:08.024233 env[1061]: time="2024-02-09T19:24:08.023647974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-nxr8s,Uid:ad7fef62-7303-461b-8c4a-e94da7777f22,Namespace:kube-system,Attempt:0,}" Feb 9 19:24:08.052753 env[1061]: time="2024-02-09T19:24:08.052620601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:24:08.053228 env[1061]: time="2024-02-09T19:24:08.053162840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:24:08.053505 env[1061]: time="2024-02-09T19:24:08.053447014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:24:08.054083 env[1061]: time="2024-02-09T19:24:08.054011715Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd pid=3026 runtime=io.containerd.runc.v2 Feb 9 19:24:08.071223 env[1061]: time="2024-02-09T19:24:08.071053681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:24:08.071500 env[1061]: time="2024-02-09T19:24:08.071154500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:24:08.071500 env[1061]: time="2024-02-09T19:24:08.071243358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:24:08.071842 env[1061]: time="2024-02-09T19:24:08.071680569Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92375f0e61e07ec40e82211c05c38d3c2dc62f3887f72cca8414aec7f3c356d8 pid=3045 runtime=io.containerd.runc.v2 Feb 9 19:24:08.089508 systemd[1]: Started cri-containerd-771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd.scope. Feb 9 19:24:08.117086 systemd[1]: Started cri-containerd-92375f0e61e07ec40e82211c05c38d3c2dc62f3887f72cca8414aec7f3c356d8.scope. 
Feb 9 19:24:08.141151 env[1061]: time="2024-02-09T19:24:08.141098803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5qbzl,Uid:1aa5ab7e-268c-4050-aeec-1b6c1ede04b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\"" Feb 9 19:24:08.152626 env[1061]: time="2024-02-09T19:24:08.152533903Z" level=info msg="CreateContainer within sandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:24:08.171194 env[1061]: time="2024-02-09T19:24:08.171088752Z" level=info msg="CreateContainer within sandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\"" Feb 9 19:24:08.173044 env[1061]: time="2024-02-09T19:24:08.172979736Z" level=info msg="StartContainer for \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\"" Feb 9 19:24:08.187706 env[1061]: time="2024-02-09T19:24:08.187658091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-nxr8s,Uid:ad7fef62-7303-461b-8c4a-e94da7777f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"92375f0e61e07ec40e82211c05c38d3c2dc62f3887f72cca8414aec7f3c356d8\"" Feb 9 19:24:08.190178 env[1061]: time="2024-02-09T19:24:08.190145184Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:24:08.203743 systemd[1]: Started cri-containerd-bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7.scope. Feb 9 19:24:08.226753 systemd[1]: cri-containerd-bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7.scope: Deactivated successfully. 
Feb 9 19:24:08.278438 env[1061]: time="2024-02-09T19:24:08.277136924Z" level=info msg="shim disconnected" id=bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7 Feb 9 19:24:08.278438 env[1061]: time="2024-02-09T19:24:08.278129269Z" level=warning msg="cleaning up after shim disconnected" id=bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7 namespace=k8s.io Feb 9 19:24:08.278438 env[1061]: time="2024-02-09T19:24:08.278225820Z" level=info msg="cleaning up dead shim" Feb 9 19:24:08.292764 env[1061]: time="2024-02-09T19:24:08.292647392Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3124 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:24:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:24:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:24:08.292988 kubelet[1346]: E0209 19:24:08.292858 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:08.293654 env[1061]: time="2024-02-09T19:24:08.293550449Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Feb 9 19:24:08.294207 env[1061]: time="2024-02-09T19:24:08.294087478Z" level=error msg="Failed to pipe stderr of container \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\"" error="reading from a closed fifo" Feb 9 19:24:08.294350 env[1061]: time="2024-02-09T19:24:08.294281653Z" level=error msg="Failed to pipe stdout of container \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\"" error="reading from a closed fifo" Feb 9 19:24:08.301745 env[1061]: time="2024-02-09T19:24:08.301576131Z" level=error msg="StartContainer for \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:24:08.302456 kubelet[1346]: E0209 19:24:08.302374 1346 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7" Feb 9 19:24:08.303523 kubelet[1346]: E0209 19:24:08.303473 1346 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:24:08.303523 kubelet[1346]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:24:08.303523 kubelet[1346]: rm /hostbin/cilium-mount Feb 9 19:24:08.303523 kubelet[1346]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wvtwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5qbzl_kube-system(1aa5ab7e-268c-4050-aeec-1b6c1ede04b7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:24:08.303733 kubelet[1346]: E0209 19:24:08.303660 1346 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5qbzl" podUID=1aa5ab7e-268c-4050-aeec-1b6c1ede04b7 Feb 9 19:24:08.923524 env[1061]: time="2024-02-09T19:24:08.923383194Z" level=info msg="CreateContainer within sandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 9 19:24:08.953816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089972935.mount: Deactivated successfully. Feb 9 19:24:08.961027 env[1061]: time="2024-02-09T19:24:08.960856736Z" level=info msg="CreateContainer within sandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\"" Feb 9 19:24:08.962617 env[1061]: time="2024-02-09T19:24:08.962479215Z" level=info msg="StartContainer for \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\"" Feb 9 19:24:09.003087 systemd[1]: Started cri-containerd-0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41.scope. Feb 9 19:24:09.019355 systemd[1]: cri-containerd-0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41.scope: Deactivated successfully. 
Feb 9 19:24:09.029258 env[1061]: time="2024-02-09T19:24:09.029187224Z" level=info msg="shim disconnected" id=0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41 Feb 9 19:24:09.029587 env[1061]: time="2024-02-09T19:24:09.029253890Z" level=warning msg="cleaning up after shim disconnected" id=0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41 namespace=k8s.io Feb 9 19:24:09.029587 env[1061]: time="2024-02-09T19:24:09.029271072Z" level=info msg="cleaning up dead shim" Feb 9 19:24:09.039653 env[1061]: time="2024-02-09T19:24:09.039581676Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3161 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:24:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:24:09.040013 env[1061]: time="2024-02-09T19:24:09.039925283Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Feb 9 19:24:09.040427 env[1061]: time="2024-02-09T19:24:09.040374907Z" level=error msg="Failed to pipe stdout of container \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\"" error="reading from a closed fifo" Feb 9 19:24:09.040569 env[1061]: time="2024-02-09T19:24:09.040522044Z" level=error msg="Failed to pipe stderr of container \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\"" error="reading from a closed fifo" Feb 9 19:24:09.044151 env[1061]: time="2024-02-09T19:24:09.044102913Z" level=error msg="StartContainer for \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:24:09.045054 kubelet[1346]: E0209 19:24:09.044421 1346 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41" Feb 9 19:24:09.045054 kubelet[1346]: E0209 19:24:09.044548 1346 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:24:09.045054 kubelet[1346]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:24:09.045054 kubelet[1346]: rm /hostbin/cilium-mount Feb 9 19:24:09.045235 kubelet[1346]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wvtwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5qbzl_kube-system(1aa5ab7e-268c-4050-aeec-1b6c1ede04b7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:24:09.045345 kubelet[1346]: E0209 19:24:09.044608 1346 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5qbzl" podUID=1aa5ab7e-268c-4050-aeec-1b6c1ede04b7 Feb 9 19:24:09.294148 kubelet[1346]: E0209 19:24:09.293401 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:09.895965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41-rootfs.mount: Deactivated successfully. 
Feb 9 19:24:09.926922 kubelet[1346]: I0209 19:24:09.926183 1346 scope.go:115] "RemoveContainer" containerID="bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7" Feb 9 19:24:09.926922 kubelet[1346]: I0209 19:24:09.926685 1346 scope.go:115] "RemoveContainer" containerID="bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7" Feb 9 19:24:09.943460 env[1061]: time="2024-02-09T19:24:09.943314194Z" level=info msg="RemoveContainer for \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\"" Feb 9 19:24:09.946834 env[1061]: time="2024-02-09T19:24:09.946775990Z" level=info msg="RemoveContainer for \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\"" Feb 9 19:24:09.947046 env[1061]: time="2024-02-09T19:24:09.946976587Z" level=error msg="RemoveContainer for \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\" failed" error="failed to set removing state for container \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\": container is already in removing state" Feb 9 19:24:09.947871 kubelet[1346]: E0209 19:24:09.947200 1346 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\": container is already in removing state" containerID="bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7" Feb 9 19:24:09.947871 kubelet[1346]: E0209 19:24:09.947253 1346 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7": container is already in removing state; Skipping pod "cilium-5qbzl_kube-system(1aa5ab7e-268c-4050-aeec-1b6c1ede04b7)" Feb 9 19:24:09.947871 kubelet[1346]: E0209 19:24:09.947627 1346 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5qbzl_kube-system(1aa5ab7e-268c-4050-aeec-1b6c1ede04b7)\"" pod="kube-system/cilium-5qbzl" podUID=1aa5ab7e-268c-4050-aeec-1b6c1ede04b7 Feb 9 19:24:09.966309 env[1061]: time="2024-02-09T19:24:09.966242721Z" level=info msg="RemoveContainer for \"bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7\" returns successfully" Feb 9 19:24:10.295110 kubelet[1346]: E0209 19:24:10.294854 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:10.933477 kubelet[1346]: E0209 19:24:10.933413 1346 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5qbzl_kube-system(1aa5ab7e-268c-4050-aeec-1b6c1ede04b7)\"" pod="kube-system/cilium-5qbzl" podUID=1aa5ab7e-268c-4050-aeec-1b6c1ede04b7 Feb 9 19:24:10.954320 kubelet[1346]: I0209 19:24:10.954203 1346 setters.go:548] "Node became not ready" node="172.24.4.101" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:24:10.954060671 +0000 UTC m=+105.361361709 LastTransitionTime:2024-02-09 19:24:10.954060671 +0000 UTC m=+105.361361709 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:24:11.100670 env[1061]: 
time="2024-02-09T19:24:11.100620088Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:24:11.107125 env[1061]: time="2024-02-09T19:24:11.107054568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:24:11.109973 env[1061]: time="2024-02-09T19:24:11.109912979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:24:11.111086 env[1061]: time="2024-02-09T19:24:11.111017004Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:24:11.114697 env[1061]: time="2024-02-09T19:24:11.114650321Z" level=info msg="CreateContainer within sandbox \"92375f0e61e07ec40e82211c05c38d3c2dc62f3887f72cca8414aec7f3c356d8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:24:11.136420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438549786.mount: Deactivated successfully. Feb 9 19:24:11.138856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491563765.mount: Deactivated successfully. Feb 9 19:24:11.155322 env[1061]: time="2024-02-09T19:24:11.155248213Z" level=info msg="CreateContainer within sandbox \"92375f0e61e07ec40e82211c05c38d3c2dc62f3887f72cca8414aec7f3c356d8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0299294c8555fc6390dba1e0e38e3941d08778b4a7fc47bc05c98edb7e423953\"" Feb 9 19:24:11.157161 env[1061]: time="2024-02-09T19:24:11.157075267Z" level=info msg="StartContainer for \"0299294c8555fc6390dba1e0e38e3941d08778b4a7fc47bc05c98edb7e423953\"" Feb 9 19:24:11.195337 systemd[1]: Started cri-containerd-0299294c8555fc6390dba1e0e38e3941d08778b4a7fc47bc05c98edb7e423953.scope. 
Feb 9 19:24:11.239511 env[1061]: time="2024-02-09T19:24:11.239443124Z" level=info msg="StartContainer for \"0299294c8555fc6390dba1e0e38e3941d08778b4a7fc47bc05c98edb7e423953\" returns successfully" Feb 9 19:24:11.295787 kubelet[1346]: E0209 19:24:11.295733 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:11.329412 kubelet[1346]: E0209 19:24:11.329327 1346 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:24:11.393692 kubelet[1346]: W0209 19:24:11.393467 1346 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1aa5ab7e_268c_4050_aeec_1b6c1ede04b7.slice/cri-containerd-bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7.scope WatchSource:0}: container "bf1802b88d7bd55de039cc81e310d229f3c4f7aebf66733844eaadf4b809efa7" in namespace "k8s.io": not found Feb 9 19:24:11.940548 env[1061]: time="2024-02-09T19:24:11.940405119Z" level=info msg="StopPodSandbox for \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\"" Feb 9 19:24:11.940816 env[1061]: time="2024-02-09T19:24:11.940679254Z" level=info msg="Container to stop \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:24:11.958406 systemd[1]: cri-containerd-771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd.scope: Deactivated successfully. Feb 9 19:24:12.130813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd-rootfs.mount: Deactivated successfully. Feb 9 19:24:12.131081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd-shm.mount: Deactivated successfully. 
Feb 9 19:24:12.297658 kubelet[1346]: E0209 19:24:12.297526 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:12.409215 env[1061]: time="2024-02-09T19:24:12.409113948Z" level=info msg="shim disconnected" id=771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd Feb 9 19:24:12.410124 env[1061]: time="2024-02-09T19:24:12.410074704Z" level=warning msg="cleaning up after shim disconnected" id=771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd namespace=k8s.io Feb 9 19:24:12.410377 env[1061]: time="2024-02-09T19:24:12.410340422Z" level=info msg="cleaning up dead shim" Feb 9 19:24:12.430016 env[1061]: time="2024-02-09T19:24:12.429855299Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3232 runtime=io.containerd.runc.v2\n" Feb 9 19:24:12.430736 env[1061]: time="2024-02-09T19:24:12.430626487Z" level=info msg="TearDown network for sandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" successfully" Feb 9 19:24:12.430840 env[1061]: time="2024-02-09T19:24:12.430721527Z" level=info msg="StopPodSandbox for \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" returns successfully" Feb 9 19:24:12.617155 kubelet[1346]: I0209 19:24:12.617084 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cni-path\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.617587 kubelet[1346]: I0209 19:24:12.617312 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.617718 kubelet[1346]: I0209 19:24:12.617544 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-cgroup\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.617718 kubelet[1346]: I0209 19:24:12.617694 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-clustermesh-secrets\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.617860 kubelet[1346]: I0209 19:24:12.617761 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hostproc\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.617860 kubelet[1346]: I0209 19:24:12.617829 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-config-path\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618068 kubelet[1346]: I0209 19:24:12.617983 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-bpf-maps\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618183 kubelet[1346]: I0209 19:24:12.618072 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-etc-cni-netd\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618183 kubelet[1346]: I0209 19:24:12.618131 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-xtables-lock\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618326 kubelet[1346]: I0209 19:24:12.618191 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-ipsec-secrets\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618326 kubelet[1346]: I0209 19:24:12.618245 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-net\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618326 kubelet[1346]: I0209 19:24:12.618294 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-run\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 
19:24:12.618515 kubelet[1346]: I0209 19:24:12.618355 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvtwp\" (UniqueName: \"kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-kube-api-access-wvtwp\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618515 kubelet[1346]: I0209 19:24:12.618411 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hubble-tls\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618515 kubelet[1346]: I0209 19:24:12.618461 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-lib-modules\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618515 kubelet[1346]: I0209 19:24:12.618513 1346 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-kernel\") pod \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\" (UID: \"1aa5ab7e-268c-4050-aeec-1b6c1ede04b7\") " Feb 9 19:24:12.618769 kubelet[1346]: I0209 19:24:12.618594 1346 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cni-path\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.618769 kubelet[1346]: I0209 19:24:12.618637 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.619075 kubelet[1346]: I0209 19:24:12.619025 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.619790 kubelet[1346]: I0209 19:24:12.619696 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.619963 kubelet[1346]: I0209 19:24:12.619800 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.620612 kubelet[1346]: I0209 19:24:12.620529 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.621087 kubelet[1346]: W0209 19:24:12.621012 1346 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:24:12.621684 kubelet[1346]: I0209 19:24:12.621626 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.621801 kubelet[1346]: I0209 19:24:12.621698 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.621801 kubelet[1346]: I0209 19:24:12.621742 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.625349 kubelet[1346]: I0209 19:24:12.625279 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:12.627788 kubelet[1346]: I0209 19:24:12.627724 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:24:12.636456 systemd[1]: var-lib-kubelet-pods-1aa5ab7e\x2d268c\x2d4050\x2daeec\x2d1b6c1ede04b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwvtwp.mount: Deactivated successfully. Feb 9 19:24:12.639089 kubelet[1346]: I0209 19:24:12.639024 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-kube-api-access-wvtwp" (OuterVolumeSpecName: "kube-api-access-wvtwp") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "kube-api-access-wvtwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:12.644341 systemd[1]: var-lib-kubelet-pods-1aa5ab7e\x2d268c\x2d4050\x2daeec\x2d1b6c1ede04b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:24:12.649720 systemd[1]: var-lib-kubelet-pods-1aa5ab7e\x2d268c\x2d4050\x2daeec\x2d1b6c1ede04b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:24:12.655015 kubelet[1346]: I0209 19:24:12.651243 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:24:12.655015 kubelet[1346]: I0209 19:24:12.651536 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:12.654741 systemd[1]: var-lib-kubelet-pods-1aa5ab7e\x2d268c\x2d4050\x2daeec\x2d1b6c1ede04b7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:24:12.656864 kubelet[1346]: I0209 19:24:12.656804 1346 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" (UID: "1aa5ab7e-268c-4050-aeec-1b6c1ede04b7"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:24:12.719530 kubelet[1346]: I0209 19:24:12.719462 1346 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-run\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719530 kubelet[1346]: I0209 19:24:12.719527 1346 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-wvtwp\" (UniqueName: \"kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-kube-api-access-wvtwp\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719568 1346 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-etc-cni-netd\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719596 1346 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-xtables-lock\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719628 1346 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-ipsec-secrets\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719657 1346 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-net\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719688 1346 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-host-proc-sys-kernel\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719715 1346 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hubble-tls\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719742 1346 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-lib-modules\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.719941 kubelet[1346]: I0209 19:24:12.719769 1346 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-cgroup\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.720458 kubelet[1346]: I0209 19:24:12.719797 1346 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-clustermesh-secrets\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.720458 kubelet[1346]: I0209 19:24:12.719824 1346 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-hostproc\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.720458 kubelet[1346]: I0209 19:24:12.719918 1346 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-cilium-config-path\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.720458 kubelet[1346]: I0209 19:24:12.719997 1346 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7-bpf-maps\") on node \"172.24.4.101\" DevicePath \"\"" Feb 9 19:24:12.947948 kubelet[1346]: I0209 19:24:12.946986 1346 scope.go:115] "RemoveContainer" containerID="0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41" Feb 9 19:24:12.955516 systemd[1]: Removed slice kubepods-burstable-pod1aa5ab7e_268c_4050_aeec_1b6c1ede04b7.slice. Feb 9 19:24:12.958382 env[1061]: time="2024-02-09T19:24:12.958253905Z" level=info msg="RemoveContainer for \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\"" Feb 9 19:24:12.965536 env[1061]: time="2024-02-09T19:24:12.965471526Z" level=info msg="RemoveContainer for \"0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41\" returns successfully" Feb 9 19:24:12.977112 kubelet[1346]: I0209 19:24:12.976993 1346 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-nxr8s" podStartSLOduration=-9.223372030877949e+09 pod.CreationTimestamp="2024-02-09 19:24:07 +0000 UTC" firstStartedPulling="2024-02-09 19:24:08.189522945 +0000 UTC m=+102.596823934" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:24:11.995933645 +0000 UTC m=+106.403234713" watchObservedRunningTime="2024-02-09 19:24:12.976826902 +0000 UTC m=+107.384127940" Feb 9 19:24:13.021078 kubelet[1346]: I0209 19:24:13.020994 1346 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:24:13.021458 kubelet[1346]: E0209 19:24:13.021431 1346 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" containerName="mount-cgroup" Feb 9 19:24:13.021690 kubelet[1346]: E0209 19:24:13.021664 1346 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" containerName="mount-cgroup" Feb 9 19:24:13.021992 kubelet[1346]: I0209 19:24:13.021964 1346 memory_manager.go:346] "RemoveStaleState removing state" podUID="1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" containerName="mount-cgroup" Feb 9 19:24:13.022219 kubelet[1346]: I0209 19:24:13.022193 1346 memory_manager.go:346] "RemoveStaleState removing state" podUID="1aa5ab7e-268c-4050-aeec-1b6c1ede04b7" containerName="mount-cgroup" Feb 9 19:24:13.038095 systemd[1]: Created slice kubepods-burstable-pod58962e22_5a3e_4d07_a1b3_52c2f7072dee.slice. 
Feb 9 19:24:13.122652 kubelet[1346]: I0209 19:24:13.122508 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58962e22-5a3e-4d07-a1b3-52c2f7072dee-clustermesh-secrets\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.122652 kubelet[1346]: I0209 19:24:13.122643 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-cilium-run\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123029 kubelet[1346]: I0209 19:24:13.122759 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-cilium-cgroup\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123029 kubelet[1346]: I0209 19:24:13.122861 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-etc-cni-netd\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123183 kubelet[1346]: I0209 19:24:13.123029 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-lib-modules\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123258 kubelet[1346]: I0209 19:24:13.123246 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-xtables-lock\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123390 kubelet[1346]: I0209 19:24:13.123354 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsp6w\" (UniqueName: \"kubernetes.io/projected/58962e22-5a3e-4d07-a1b3-52c2f7072dee-kube-api-access-nsp6w\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123485 kubelet[1346]: I0209 19:24:13.123463 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-hostproc\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123573 kubelet[1346]: I0209 19:24:13.123564 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-cni-path\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123740 kubelet[1346]: I0209 19:24:13.123671 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/58962e22-5a3e-4d07-a1b3-52c2f7072dee-cilium-ipsec-secrets\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.123851 kubelet[1346]: I0209 19:24:13.123786 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-host-proc-sys-net\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.124003 kubelet[1346]: I0209 19:24:13.123929 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58962e22-5a3e-4d07-a1b3-52c2f7072dee-hubble-tls\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.124090 kubelet[1346]: I0209 19:24:13.124079 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-bpf-maps\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.124224 kubelet[1346]: I0209 19:24:13.124189 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58962e22-5a3e-4d07-a1b3-52c2f7072dee-cilium-config-path\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.124521 kubelet[1346]: I0209 19:24:13.124489 1346 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58962e22-5a3e-4d07-a1b3-52c2f7072dee-host-proc-sys-kernel\") pod \"cilium-682g4\" (UID: \"58962e22-5a3e-4d07-a1b3-52c2f7072dee\") " pod="kube-system/cilium-682g4" Feb 9 19:24:13.297938 kubelet[1346]: E0209 19:24:13.297678 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:13.349973 env[1061]: time="2024-02-09T19:24:13.348861177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-682g4,Uid:58962e22-5a3e-4d07-a1b3-52c2f7072dee,Namespace:kube-system,Attempt:0,}" Feb 9 19:24:13.401366 env[1061]: time="2024-02-09T19:24:13.400986330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:24:13.401366 env[1061]: time="2024-02-09T19:24:13.401071319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:24:13.401366 env[1061]: time="2024-02-09T19:24:13.401103601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:24:13.401753 env[1061]: time="2024-02-09T19:24:13.401434382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03 pid=3261 runtime=io.containerd.runc.v2 Feb 9 19:24:13.432284 systemd[1]: Started cri-containerd-46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03.scope. 
Feb 9 19:24:13.466326 env[1061]: time="2024-02-09T19:24:13.466261610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-682g4,Uid:58962e22-5a3e-4d07-a1b3-52c2f7072dee,Namespace:kube-system,Attempt:0,} returns sandbox id \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\"" Feb 9 19:24:13.470212 env[1061]: time="2024-02-09T19:24:13.470155977Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:24:13.493050 env[1061]: time="2024-02-09T19:24:13.492956595Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc\"" Feb 9 19:24:13.493872 env[1061]: time="2024-02-09T19:24:13.493839514Z" level=info msg="StartContainer for \"3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc\"" Feb 9 19:24:13.513029 systemd[1]: Started cri-containerd-3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc.scope. Feb 9 19:24:13.562978 env[1061]: time="2024-02-09T19:24:13.561709559Z" level=info msg="StartContainer for \"3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc\" returns successfully" Feb 9 19:24:13.601328 systemd[1]: cri-containerd-3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc.scope: Deactivated successfully. Feb 9 19:24:13.678965 env[1061]: time="2024-02-09T19:24:13.678800200Z" level=info msg="shim disconnected" id=3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc Feb 9 19:24:13.678965 env[1061]: time="2024-02-09T19:24:13.678946896Z" level=warning msg="cleaning up after shim disconnected" id=3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc namespace=k8s.io Feb 9 19:24:13.678965 env[1061]: time="2024-02-09T19:24:13.678973646Z" level=info msg="cleaning up dead shim" Feb 9 19:24:13.698541 env[1061]: time="2024-02-09T19:24:13.698458504Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3346 runtime=io.containerd.runc.v2\n" Feb 9 19:24:13.957597 env[1061]: time="2024-02-09T19:24:13.957514992Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:24:13.983675 env[1061]: time="2024-02-09T19:24:13.983593890Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f\"" Feb 9 19:24:13.984860 env[1061]: time="2024-02-09T19:24:13.984810165Z" level=info msg="StartContainer for \"b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f\"" Feb 9 19:24:14.018708 systemd[1]: Started cri-containerd-b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f.scope. Feb 9 19:24:14.088073 env[1061]: time="2024-02-09T19:24:14.087961040Z" level=info msg="StartContainer for \"b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f\" returns successfully" Feb 9 19:24:14.096991 systemd[1]: cri-containerd-b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f.scope: Deactivated successfully. 
Feb 9 19:24:14.144365 env[1061]: time="2024-02-09T19:24:14.144251285Z" level=info msg="shim disconnected" id=b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f Feb 9 19:24:14.144845 env[1061]: time="2024-02-09T19:24:14.144801028Z" level=warning msg="cleaning up after shim disconnected" id=b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f namespace=k8s.io Feb 9 19:24:14.145133 env[1061]: time="2024-02-09T19:24:14.145096943Z" level=info msg="cleaning up dead shim" Feb 9 19:24:14.159934 env[1061]: time="2024-02-09T19:24:14.159828009Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3411 runtime=io.containerd.runc.v2\n" Feb 9 19:24:14.298273 kubelet[1346]: E0209 19:24:14.298000 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:14.469173 kubelet[1346]: I0209 19:24:14.469100 1346 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1aa5ab7e-268c-4050-aeec-1b6c1ede04b7 path="/var/lib/kubelet/pods/1aa5ab7e-268c-4050-aeec-1b6c1ede04b7/volumes" Feb 9 19:24:14.503694 kubelet[1346]: W0209 19:24:14.503606 1346 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1aa5ab7e_268c_4050_aeec_1b6c1ede04b7.slice/cri-containerd-0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41.scope WatchSource:0}: container "0cbb1a0cffe84f12c5b796d88519834f95f917d90cab7be85e8574656c4e9e41" in namespace "k8s.io": not found Feb 9 19:24:14.969508 env[1061]: time="2024-02-09T19:24:14.969366086Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:24:15.013912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304023610.mount: Deactivated successfully. Feb 9 19:24:15.035706 env[1061]: time="2024-02-09T19:24:15.035623152Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835\"" Feb 9 19:24:15.037672 env[1061]: time="2024-02-09T19:24:15.037546826Z" level=info msg="StartContainer for \"38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835\"" Feb 9 19:24:15.089065 systemd[1]: Started cri-containerd-38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835.scope. Feb 9 19:24:15.135851 env[1061]: time="2024-02-09T19:24:15.135788568Z" level=info msg="StartContainer for \"38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835\" returns successfully" Feb 9 19:24:15.144271 systemd[1]: cri-containerd-38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835.scope: Deactivated successfully. 
Feb 9 19:24:15.175098 env[1061]: time="2024-02-09T19:24:15.175010061Z" level=info msg="shim disconnected" id=38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835 Feb 9 19:24:15.175098 env[1061]: time="2024-02-09T19:24:15.175076476Z" level=warning msg="cleaning up after shim disconnected" id=38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835 namespace=k8s.io Feb 9 19:24:15.175098 env[1061]: time="2024-02-09T19:24:15.175089050Z" level=info msg="cleaning up dead shim" Feb 9 19:24:15.185022 env[1061]: time="2024-02-09T19:24:15.184963190Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3469 runtime=io.containerd.runc.v2\n" Feb 9 19:24:15.234919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835-rootfs.mount: Deactivated successfully. Feb 9 19:24:15.299284 kubelet[1346]: E0209 19:24:15.299180 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:15.977326 env[1061]: time="2024-02-09T19:24:15.977198548Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:24:16.026149 env[1061]: time="2024-02-09T19:24:16.026099835Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de\"" Feb 9 19:24:16.027229 env[1061]: time="2024-02-09T19:24:16.027203247Z" level=info msg="StartContainer for \"0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de\"" Feb 9 19:24:16.070039 systemd[1]: Started cri-containerd-0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de.scope. Feb 9 19:24:16.099858 systemd[1]: cri-containerd-0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de.scope: Deactivated successfully. 
Feb 9 19:24:16.102491 env[1061]: time="2024-02-09T19:24:16.102002193Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58962e22_5a3e_4d07_a1b3_52c2f7072dee.slice/cri-containerd-0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de.scope/memory.events\": no such file or directory" Feb 9 19:24:16.108614 env[1061]: time="2024-02-09T19:24:16.108521330Z" level=info msg="StartContainer for \"0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de\" returns successfully" Feb 9 19:24:16.141179 env[1061]: time="2024-02-09T19:24:16.141123075Z" level=info msg="shim disconnected" id=0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de Feb 9 19:24:16.141468 env[1061]: time="2024-02-09T19:24:16.141443718Z" level=warning msg="cleaning up after shim disconnected" id=0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de namespace=k8s.io Feb 9 19:24:16.141551 env[1061]: time="2024-02-09T19:24:16.141531182Z" level=info msg="cleaning up dead shim" Feb 9 19:24:16.152726 env[1061]: time="2024-02-09T19:24:16.152661821Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3529 runtime=io.containerd.runc.v2\n" Feb 9 19:24:16.235245 systemd[1]: run-containerd-runc-k8s.io-0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de-runc.xthUV2.mount: Deactivated successfully. Feb 9 19:24:16.235382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de-rootfs.mount: Deactivated successfully. Feb 9 19:24:16.300532 kubelet[1346]: E0209 19:24:16.300396 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:16.331282 kubelet[1346]: E0209 19:24:16.331217 1346 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:24:16.985201 env[1061]: time="2024-02-09T19:24:16.984790440Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:24:17.044857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160977351.mount: Deactivated successfully. Feb 9 19:24:17.055610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4124340749.mount: Deactivated successfully. Feb 9 19:24:17.066402 env[1061]: time="2024-02-09T19:24:17.066304338Z" level=info msg="CreateContainer within sandbox \"46614988d0271c8b247a8f77c8bf05d7e649b8c21d0b0ee60ca07d45ee7f6b03\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"178f0330b4628ab3a7390798c56ccefa5244ff2597ad70bf6f834b30b3297804\"" Feb 9 19:24:17.071639 env[1061]: time="2024-02-09T19:24:17.071549150Z" level=info msg="StartContainer for \"178f0330b4628ab3a7390798c56ccefa5244ff2597ad70bf6f834b30b3297804\"" Feb 9 19:24:17.104299 systemd[1]: Started cri-containerd-178f0330b4628ab3a7390798c56ccefa5244ff2597ad70bf6f834b30b3297804.scope. 
Feb 9 19:24:17.167618 env[1061]: time="2024-02-09T19:24:17.167551331Z" level=info msg="StartContainer for \"178f0330b4628ab3a7390798c56ccefa5244ff2597ad70bf6f834b30b3297804\" returns successfully" Feb 9 19:24:17.301646 kubelet[1346]: E0209 19:24:17.301436 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:17.636136 kubelet[1346]: W0209 19:24:17.636069 1346 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58962e22_5a3e_4d07_a1b3_52c2f7072dee.slice/cri-containerd-3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc.scope WatchSource:0}: task 3ccb5cb8e0b975041a936ed043ae6d2ea3a489f5ec58e90b3454399601d386bc not found: not found Feb 9 19:24:18.023378 kubelet[1346]: I0209 19:24:18.023152 1346 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-682g4" podStartSLOduration=5.023019132 pod.CreationTimestamp="2024-02-09 19:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:24:18.022930806 +0000 UTC m=+112.430231804" watchObservedRunningTime="2024-02-09 19:24:18.023019132 +0000 UTC m=+112.430320120" Feb 9 19:24:18.130953 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:24:18.178932 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 9 19:24:18.303243 kubelet[1346]: E0209 19:24:18.302852 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:19.303295 kubelet[1346]: E0209 19:24:19.303141 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:20.304689 kubelet[1346]: E0209 19:24:20.304625 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:20.752675 kubelet[1346]: W0209 19:24:20.752628 1346 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58962e22_5a3e_4d07_a1b3_52c2f7072dee.slice/cri-containerd-b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f.scope WatchSource:0}: task b119244556e40d1896cde03ff64b3f0561feeadcec0cc937e47dcb932cdfe07f not found: not found Feb 9 19:24:21.305808 kubelet[1346]: E0209 19:24:21.305727 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:21.365254 systemd-networkd[969]: lxc_health: Link UP Feb 9 19:24:21.375445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:24:21.374522 systemd-networkd[969]: lxc_health: Gained carrier Feb 9 19:24:22.305986 kubelet[1346]: E0209 19:24:22.305936 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:22.990551 systemd-networkd[969]: lxc_health: Gained IPv6LL Feb 9 19:24:23.307220 kubelet[1346]: E0209 19:24:23.307058 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:23.417115 systemd[1]: run-containerd-runc-k8s.io-178f0330b4628ab3a7390798c56ccefa5244ff2597ad70bf6f834b30b3297804-runc.ZiQEcH.mount: Deactivated successfully. 
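Between 19:24:13 and 19:24:17 the same pattern repeats once per Cilium container: CreateContainer in the sandbox, StartContainer returns successfully, then (for the short-lived init containers) the scope deactivates and containerd logs "shim disconnected" while cleaning up; the order is mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent. A hedged helper for pulling that sequence out of a journal dump like this one is sketched below; the regex targets the exact "CreateContainer … returns container id" wording in the records above, and the function name is illustrative.

import re

CREATE_RETURNS = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]{64})\\?" for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} '
    r'returns container id \\?"(?P<cid>[0-9a-f]{64})\\?"'
)

def container_sequence(journal_text: str) -> dict:
    # Container name -> container id, in the order containerd created them
    # (dict insertion order, Python 3.7+).
    return {m.group("name"): m.group("cid")
            for m in CREATE_RETURNS.finditer(journal_text)}

Run over this section it maps mount-cgroup to 3ccb5cb8…, apply-sysctl-overwrites to b1192445…, mount-bpf-fs to 38b39ded…, clean-cilium-state to 0345dcb9…, and cilium-agent to 178f0330…, the same ids that reappear later in the shim-cleanup and "Failed to process watch event" records.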
Feb 9 19:24:23.868624 kubelet[1346]: W0209 19:24:23.868593 1346 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58962e22_5a3e_4d07_a1b3_52c2f7072dee.slice/cri-containerd-38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835.scope WatchSource:0}: task 38b39ded6491f0dbda2bfbff01d36ca4c81eca2fd0998143c519d4c20d21e835 not found: not found Feb 9 19:24:24.308551 kubelet[1346]: E0209 19:24:24.308456 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:25.310073 kubelet[1346]: E0209 19:24:25.310024 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:25.725645 systemd[1]: run-containerd-runc-k8s.io-178f0330b4628ab3a7390798c56ccefa5244ff2597ad70bf6f834b30b3297804-runc.bTs6aa.mount: Deactivated successfully. Feb 9 19:24:26.187113 kubelet[1346]: E0209 19:24:26.187047 1346 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:26.202674 env[1061]: time="2024-02-09T19:24:26.202629853Z" level=info msg="StopPodSandbox for \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\"" Feb 9 19:24:26.203345 env[1061]: time="2024-02-09T19:24:26.203301655Z" level=info msg="TearDown network for sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" successfully" Feb 9 19:24:26.203427 env[1061]: time="2024-02-09T19:24:26.203408966Z" level=info msg="StopPodSandbox for \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" returns successfully" Feb 9 19:24:26.203894 env[1061]: time="2024-02-09T19:24:26.203811102Z" level=info msg="RemovePodSandbox for \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\"" Feb 9 19:24:26.203949 env[1061]: time="2024-02-09T19:24:26.203858931Z" level=info msg="Forcibly stopping sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\"" Feb 9 19:24:26.204004 env[1061]: time="2024-02-09T19:24:26.203966072Z" level=info msg="TearDown network for sandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" successfully" Feb 9 19:24:26.219925 env[1061]: time="2024-02-09T19:24:26.219817210Z" level=info msg="RemovePodSandbox \"bdb72ea726923b0fb2c34e252f2d425132eccdb45449a5e8c153f3415216f157\" returns successfully" Feb 9 19:24:26.221317 env[1061]: time="2024-02-09T19:24:26.221293894Z" level=info msg="StopPodSandbox for \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\"" Feb 9 19:24:26.221651 env[1061]: time="2024-02-09T19:24:26.221597453Z" level=info msg="TearDown network for sandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" successfully" Feb 9 19:24:26.221757 env[1061]: time="2024-02-09T19:24:26.221727628Z" level=info msg="StopPodSandbox for \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" returns successfully" Feb 9 19:24:26.222141 env[1061]: time="2024-02-09T19:24:26.222120466Z" level=info msg="RemovePodSandbox for \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\"" Feb 9 19:24:26.222246 env[1061]: time="2024-02-09T19:24:26.222212829Z" level=info msg="Forcibly stopping sandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\"" Feb 9 19:24:26.222437 env[1061]: time="2024-02-09T19:24:26.222417052Z" level=info msg="TearDown network for sandbox 
\"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" successfully" Feb 9 19:24:26.230178 env[1061]: time="2024-02-09T19:24:26.230045759Z" level=info msg="RemovePodSandbox \"771de2db7383e144d6c82d08c6a62c372ca9d7d96301632302764a51c7c937dd\" returns successfully" Feb 9 19:24:26.311846 kubelet[1346]: E0209 19:24:26.311793 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:26.980040 kubelet[1346]: W0209 19:24:26.979937 1346 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58962e22_5a3e_4d07_a1b3_52c2f7072dee.slice/cri-containerd-0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de.scope WatchSource:0}: task 0345dcb93659764c5df5c19df17b9e65cb01996ce76caa9bf953f775a96ec7de not found: not found Feb 9 19:24:27.313640 kubelet[1346]: E0209 19:24:27.313098 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:28.314659 kubelet[1346]: E0209 19:24:28.314609 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:29.316440 kubelet[1346]: E0209 19:24:29.316392 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:30.319092 kubelet[1346]: E0209 19:24:30.318329 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:31.318982 kubelet[1346]: E0209 19:24:31.318869 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:32.319947 kubelet[1346]: E0209 19:24:32.319859 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:33.321043 kubelet[1346]: E0209 19:24:33.320973 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:24:34.324803 kubelet[1346]: E0209 19:24:34.324617 1346 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"