Feb 8 23:40:43.040907 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:40:43.040948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:40:43.040969 kernel: BIOS-provided physical RAM map:
Feb 8 23:40:43.040983 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 8 23:40:43.040995 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 8 23:40:43.041007 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 8 23:40:43.041022 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 8 23:40:43.041035 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 8 23:40:43.041050 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 8 23:40:43.041126 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 8 23:40:43.041138 kernel: NX (Execute Disable) protection: active
Feb 8 23:40:43.041150 kernel: SMBIOS 2.8 present.
Feb 8 23:40:43.041163 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 8 23:40:43.041175 kernel: Hypervisor detected: KVM
Feb 8 23:40:43.041190 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 8 23:40:43.041208 kernel: kvm-clock: cpu 0, msr 17faa001, primary cpu clock
Feb 8 23:40:43.041221 kernel: kvm-clock: using sched offset of 7727129126 cycles
Feb 8 23:40:43.041236 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 8 23:40:43.041250 kernel: tsc: Detected 1996.249 MHz processor
Feb 8 23:40:43.041264 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:40:43.041278 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:40:43.041292 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 8 23:40:43.041306 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:40:43.041324 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:40:43.041337 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 8 23:40:43.041351 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:40:43.041365 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:40:43.041379 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:40:43.041392 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 8 23:40:43.041405 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:40:43.041419 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:40:43.041432 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 8 23:40:43.041449 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 8 23:40:43.041463 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 8 23:40:43.041472 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 8 23:40:43.041482 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 8 23:40:43.041489 kernel: No NUMA configuration found
Feb 8 23:40:43.041497 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 8 23:40:43.041505 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 8 23:40:43.041512 kernel: Zone ranges:
Feb 8 23:40:43.041525 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:40:43.041534 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 8 23:40:43.041542 kernel: Normal empty
Feb 8 23:40:43.041550 kernel: Movable zone start for each node
Feb 8 23:40:43.041558 kernel: Early memory node ranges
Feb 8 23:40:43.041566 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 8 23:40:43.041576 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 8 23:40:43.041584 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 8 23:40:43.041592 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:40:43.041600 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 8 23:40:43.041608 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 8 23:40:43.041616 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 8 23:40:43.041624 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 8 23:40:43.041632 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:40:43.041641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 8 23:40:43.041651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 8 23:40:43.041659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:40:43.041667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 8 23:40:43.041675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 8 23:40:43.041683 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:40:43.041691 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:40:43.041699 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 8 23:40:43.041707 kernel: Booting paravirtualized kernel on KVM
Feb 8 23:40:43.041716 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:40:43.041724 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:40:43.041734 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:40:43.041742 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:40:43.041750 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:40:43.041758 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 8 23:40:43.041766 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 8 23:40:43.041774 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 8 23:40:43.041783 kernel: Policy zone: DMA32
Feb 8 23:40:43.041792 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:40:43.041803 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:40:43.041811 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 8 23:40:43.041820 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:40:43.041829 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:40:43.041837 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 8 23:40:43.041846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:40:43.041854 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:40:43.041862 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:40:43.041872 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:40:43.041881 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:40:43.041889 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:40:43.041897 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:40:43.041906 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:40:43.041914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:40:43.041922 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:40:43.041930 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 8 23:40:43.041938 kernel: Console: colour VGA+ 80x25
Feb 8 23:40:43.041948 kernel: printk: console [tty0] enabled
Feb 8 23:40:43.041956 kernel: printk: console [ttyS0] enabled
Feb 8 23:40:43.041965 kernel: ACPI: Core revision 20210730
Feb 8 23:40:43.041973 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:40:43.041981 kernel: x2apic enabled
Feb 8 23:40:43.041989 kernel: Switched APIC routing to physical x2apic.
Feb 8 23:40:43.041997 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 8 23:40:43.042005 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 8 23:40:43.042014 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 8 23:40:43.042022 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 8 23:40:43.042032 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 8 23:40:43.042040 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:40:43.042048 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:40:43.042084 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:40:43.042094 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:40:43.042102 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:40:43.042110 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 8 23:40:43.042118 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:40:43.042126 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:40:43.042137 kernel: LSM: Security Framework initializing
Feb 8 23:40:43.042144 kernel: SELinux: Initializing.
Feb 8 23:40:43.042153 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 8 23:40:43.042161 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 8 23:40:43.042169 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 8 23:40:43.042177 kernel: Performance Events: AMD PMU driver.
Feb 8 23:40:43.042186 kernel: ... version:                0
Feb 8 23:40:43.042194 kernel: ... bit width:              48
Feb 8 23:40:43.042202 kernel: ... generic registers:      4
Feb 8 23:40:43.042218 kernel: ... value mask:             0000ffffffffffff
Feb 8 23:40:43.042227 kernel: ... max period:             00007fffffffffff
Feb 8 23:40:43.042237 kernel: ... fixed-purpose events:   0
Feb 8 23:40:43.042245 kernel: ... event mask:             000000000000000f
Feb 8 23:40:43.042254 kernel: signal: max sigframe size: 1440
Feb 8 23:40:43.042262 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:40:43.042272 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:40:43.042280 kernel: x86: Booting SMP configuration:
Feb 8 23:40:43.042290 kernel: .... node #0, CPUs: #1
Feb 8 23:40:43.042299 kernel: kvm-clock: cpu 1, msr 17faa041, secondary cpu clock
Feb 8 23:40:43.042307 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 8 23:40:43.042316 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:40:43.042324 kernel: smpboot: Max logical packages: 2
Feb 8 23:40:43.042332 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 8 23:40:43.042366 kernel: devtmpfs: initialized
Feb 8 23:40:43.042375 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:40:43.042384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:40:43.042395 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:40:43.042404 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:40:43.042413 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:40:43.042421 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:40:43.042430 kernel: audit: type=2000 audit(1707435642.760:1): state=initialized audit_enabled=0 res=1
Feb 8 23:40:43.042439 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:40:43.042448 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:40:43.042456 kernel: cpuidle: using governor menu
Feb 8 23:40:43.042464 kernel: ACPI: bus type PCI registered
Feb 8 23:40:43.042475 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:40:43.042483 kernel: dca service started, version 1.12.1
Feb 8 23:40:43.042492 kernel: PCI: Using configuration type 1 for base access
Feb 8 23:40:43.042501 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:40:43.042509 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:40:43.042518 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:40:43.042526 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:40:43.042535 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:40:43.042543 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:40:43.042553 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:40:43.042561 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:40:43.042570 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:40:43.042578 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:40:43.042587 kernel: ACPI: Interpreter enabled
Feb 8 23:40:43.042595 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 8 23:40:43.042604 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:40:43.042612 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:40:43.042621 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 8 23:40:43.042631 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 8 23:40:43.042826 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 8 23:40:43.042921 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 8 23:40:43.042935 kernel: acpiphp: Slot [3] registered
Feb 8 23:40:43.042944 kernel: acpiphp: Slot [4] registered
Feb 8 23:40:43.042952 kernel: acpiphp: Slot [5] registered
Feb 8 23:40:43.042960 kernel: acpiphp: Slot [6] registered
Feb 8 23:40:43.042972 kernel: acpiphp: Slot [7] registered
Feb 8 23:40:43.042980 kernel: acpiphp: Slot [8] registered
Feb 8 23:40:43.042989 kernel: acpiphp: Slot [9] registered
Feb 8 23:40:43.042998 kernel: acpiphp: Slot [10] registered
Feb 8 23:40:43.043007 kernel: acpiphp: Slot [11] registered
Feb 8 23:40:43.043016 kernel: acpiphp: Slot [12] registered
Feb 8 23:40:43.043024 kernel: acpiphp: Slot [13] registered
Feb 8 23:40:43.043032 kernel: acpiphp: Slot [14] registered
Feb 8 23:40:43.043041 kernel: acpiphp: Slot [15] registered
Feb 8 23:40:43.043097 kernel: acpiphp: Slot [16] registered
Feb 8 23:40:43.043110 kernel: acpiphp: Slot [17] registered
Feb 8 23:40:43.043118 kernel: acpiphp: Slot [18] registered
Feb 8 23:40:43.043127 kernel: acpiphp: Slot [19] registered
Feb 8 23:40:43.043135 kernel: acpiphp: Slot [20] registered
Feb 8 23:40:43.043143 kernel: acpiphp: Slot [21] registered
Feb 8 23:40:43.043152 kernel: acpiphp: Slot [22] registered
Feb 8 23:40:43.043160 kernel: acpiphp: Slot [23] registered
Feb 8 23:40:43.043168 kernel: acpiphp: Slot [24] registered
Feb 8 23:40:43.043177 kernel: acpiphp: Slot [25] registered
Feb 8 23:40:43.043187 kernel: acpiphp: Slot [26] registered
Feb 8 23:40:43.043195 kernel: acpiphp: Slot [27] registered
Feb 8 23:40:43.043204 kernel: acpiphp: Slot [28] registered
Feb 8 23:40:43.043212 kernel: acpiphp: Slot [29] registered
Feb 8 23:40:43.043220 kernel: acpiphp: Slot [30] registered
Feb 8 23:40:43.043229 kernel: acpiphp: Slot [31] registered
Feb 8 23:40:43.043237 kernel: PCI host bridge to bus 0000:00
Feb 8 23:40:43.043344 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 8 23:40:43.043426 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 8 23:40:43.043510 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 8 23:40:43.043588 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 8 23:40:43.043664 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 8 23:40:43.043741 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 8 23:40:43.043858 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 8 23:40:43.043960 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 8 23:40:43.044549 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 8 23:40:43.044689 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 8 23:40:43.044786 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 8 23:40:43.044879 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 8 23:40:43.044973 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 8 23:40:43.045100 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 8 23:40:43.045214 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 8 23:40:43.045332 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 8 23:40:43.045432 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 8 23:40:43.045560 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 8 23:40:43.045655 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 8 23:40:43.045758 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 8 23:40:43.045856 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 8 23:40:43.045954 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 8 23:40:43.046048 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 8 23:40:43.048868 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 8 23:40:43.048976 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 8 23:40:43.049102 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 8 23:40:43.049221 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 8 23:40:43.049313 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 8 23:40:43.049451 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 8 23:40:43.049552 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 8 23:40:43.049649 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 8 23:40:43.049739 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 8 23:40:43.049849 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 8 23:40:43.049939 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 8 23:40:43.050025 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 8 23:40:43.054246 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 8 23:40:43.054350 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 8 23:40:43.054441 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 8 23:40:43.054455 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 8 23:40:43.054465 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 8 23:40:43.054474 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 8 23:40:43.054483 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 8 23:40:43.054493 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 8 23:40:43.054506 kernel: iommu: Default domain type: Translated
Feb 8 23:40:43.054514 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:40:43.054618 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 8 23:40:43.054709 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 8 23:40:43.054800 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 8 23:40:43.054813 kernel: vgaarb: loaded
Feb 8 23:40:43.054822 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:40:43.054832 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:40:43.054840 kernel: PTP clock support registered
Feb 8 23:40:43.054852 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:40:43.054861 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 8 23:40:43.054870 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 8 23:40:43.054879 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 8 23:40:43.054887 kernel: clocksource: Switched to clocksource kvm-clock
Feb 8 23:40:43.054896 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:40:43.054905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:40:43.054913 kernel: pnp: PnP ACPI init
Feb 8 23:40:43.055016 kernel: pnp 00:03: [dma 2]
Feb 8 23:40:43.055034 kernel: pnp: PnP ACPI: found 5 devices
Feb 8 23:40:43.055044 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:40:43.055067 kernel: NET: Registered PF_INET protocol family
Feb 8 23:40:43.055076 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 8 23:40:43.055085 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 8 23:40:43.055094 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:40:43.055103 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:40:43.055112 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 8 23:40:43.055123 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 8 23:40:43.055132 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 8 23:40:43.055141 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 8 23:40:43.055150 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:40:43.055158 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:40:43.055274 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 8 23:40:43.055389 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 8 23:40:43.055510 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 8 23:40:43.055593 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 8 23:40:43.055675 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 8 23:40:43.055778 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 8 23:40:43.055868 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 8 23:40:43.055956 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 8 23:40:43.055969 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:40:43.055978 kernel: Initialise system trusted keyrings
Feb 8 23:40:43.055987 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 8 23:40:43.055995 kernel: Key type asymmetric registered
Feb 8 23:40:43.056007 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:40:43.056016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:40:43.056025 kernel: io scheduler mq-deadline registered
Feb 8 23:40:43.056034 kernel: io scheduler kyber registered
Feb 8 23:40:43.056042 kernel: io scheduler bfq registered
Feb 8 23:40:43.056127 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:40:43.056139 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 8 23:40:43.056149 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 8 23:40:43.056157 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 8 23:40:43.056169 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 8 23:40:43.056178 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:40:43.056187 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:40:43.056196 kernel: random: crng init done
Feb 8 23:40:43.056204 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 8 23:40:43.056213 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 8 23:40:43.056221 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 8 23:40:43.056341 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 8 23:40:43.056359 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 8 23:40:43.056439 kernel: rtc_cmos 00:04: registered as rtc0
Feb 8 23:40:43.056534 kernel: rtc_cmos 00:04: setting system clock to 2024-02-08T23:40:42 UTC (1707435642)
Feb 8 23:40:43.056613 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 8 23:40:43.056625 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:40:43.056633 kernel: Segment Routing with IPv6
Feb 8 23:40:43.056641 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:40:43.056649 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:40:43.056658 kernel: Key type dns_resolver registered
Feb 8 23:40:43.056670 kernel: IPI shorthand broadcast: enabled
Feb 8 23:40:43.056679 kernel: sched_clock: Marking stable (697809890, 116910964)->(839022527, -24301673)
Feb 8 23:40:43.056688 kernel: registered taskstats version 1
Feb 8 23:40:43.056697 kernel: Loading compiled-in X.509 certificates
Feb 8 23:40:43.056706 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:40:43.056714 kernel: Key type .fscrypt registered
Feb 8 23:40:43.056723 kernel: Key type fscrypt-provisioning registered
Feb 8 23:40:43.056732 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:40:43.056743 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:40:43.056751 kernel: ima: No architecture policies found
Feb 8 23:40:43.056760 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:40:43.056769 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:40:43.056778 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:40:43.056786 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:40:43.056795 kernel: Run /init as init process
Feb 8 23:40:43.056804 kernel: with arguments:
Feb 8 23:40:43.056812 kernel: /init
Feb 8 23:40:43.056821 kernel: with environment:
Feb 8 23:40:43.056831 kernel: HOME=/
Feb 8 23:40:43.056839 kernel: TERM=linux
Feb 8 23:40:43.056847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:40:43.056859 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:40:43.056871 systemd[1]: Detected virtualization kvm.
Feb 8 23:40:43.056881 systemd[1]: Detected architecture x86-64.
Feb 8 23:40:43.056890 systemd[1]: Running in initrd.
Feb 8 23:40:43.056901 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:40:43.056910 systemd[1]: Hostname set to .
Feb 8 23:40:43.056919 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:40:43.056929 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:40:43.056938 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:40:43.056948 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:40:43.056957 systemd[1]: Reached target paths.target.
Feb 8 23:40:43.056966 systemd[1]: Reached target slices.target.
Feb 8 23:40:43.056976 systemd[1]: Reached target swap.target.
Feb 8 23:40:43.056985 systemd[1]: Reached target timers.target.
Feb 8 23:40:43.056996 systemd[1]: Listening on iscsid.socket.
Feb 8 23:40:43.057005 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:40:43.057014 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:40:43.057024 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:40:43.057033 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:40:43.057043 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:40:43.057070 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:40:43.057081 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:40:43.057090 systemd[1]: Reached target sockets.target.
Feb 8 23:40:43.057099 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:40:43.057117 systemd[1]: Finished network-cleanup.service.
Feb 8 23:40:43.057128 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:40:43.057139 systemd[1]: Starting systemd-journald.service...
Feb 8 23:40:43.057148 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:40:43.057158 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:40:43.057168 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:40:43.057177 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:40:43.057187 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:40:43.057201 systemd-journald[185]: Journal started
Feb 8 23:40:43.057260 systemd-journald[185]: Runtime Journal (/run/log/journal/b4bce1e2efee4fc694474f52cb3d7acf) is 4.9M, max 39.5M, 34.5M free.
Feb 8 23:40:43.046453 systemd-modules-load[186]: Inserted module 'overlay'
Feb 8 23:40:43.078297 systemd[1]: Started systemd-journald.service.
Feb 8 23:40:43.078372 kernel: audit: type=1130 audit(1707435643.072:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.055566 systemd-resolved[187]: Positive Trust Anchors:
Feb 8 23:40:43.085746 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:40:43.085767 kernel: Bridge firewalling registered
Feb 8 23:40:43.055580 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:40:43.091363 kernel: audit: type=1130 audit(1707435643.082:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.055619 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:40:43.059738 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 8 23:40:43.082483 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 8 23:40:43.083069 systemd[1]: Started systemd-resolved.service.
Feb 8 23:40:43.106353 kernel: audit: type=1130 audit(1707435643.083:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.106376 kernel: audit: type=1130 audit(1707435643.096:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.083589 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:40:43.110894 kernel: audit: type=1130 audit(1707435643.104:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:40:43.085113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:40:43.112430 kernel: SCSI subsystem initialized
Feb 8 23:40:43.091611 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:40:43.096959 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:40:43.105688 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:40:43.127101 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:40:43.127162 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:40:43.129092 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:40:43.128953 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:40:43.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.134087 kernel: audit: type=1130 audit(1707435643.129:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.135272 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:40:43.137582 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 8 23:40:43.138312 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:40:43.139913 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:40:43.147085 kernel: audit: type=1130 audit(1707435643.139:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.149262 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:40:43.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.154074 kernel: audit: type=1130 audit(1707435643.150:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:43.154593 dracut-cmdline[201]: dracut-dracut-053 Feb 8 23:40:43.156557 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:40:43.223100 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:40:43.238102 kernel: iscsi: registered transport (tcp) Feb 8 23:40:43.263097 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:40:43.263168 kernel: QLogic iSCSI HBA Driver Feb 8 23:40:43.303750 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:40:43.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.305467 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:40:43.310343 kernel: audit: type=1130 audit(1707435643.304:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.362151 kernel: raid6: sse2x4 gen() 11748 MB/s Feb 8 23:40:43.379243 kernel: raid6: sse2x4 xor() 6271 MB/s Feb 8 23:40:43.396157 kernel: raid6: sse2x2 gen() 13189 MB/s Feb 8 23:40:43.413158 kernel: raid6: sse2x2 xor() 7940 MB/s Feb 8 23:40:43.430132 kernel: raid6: sse2x1 gen() 10211 MB/s Feb 8 23:40:43.448032 kernel: raid6: sse2x1 xor() 6443 MB/s Feb 8 23:40:43.448155 kernel: raid6: using algorithm sse2x2 gen() 13189 MB/s Feb 8 23:40:43.448187 kernel: raid6: .... 
xor() 7940 MB/s, rmw enabled Feb 8 23:40:43.449098 kernel: raid6: using ssse3x2 recovery algorithm Feb 8 23:40:43.465107 kernel: xor: measuring software checksum speed Feb 8 23:40:43.469111 kernel: prefetch64-sse : 4179 MB/sec Feb 8 23:40:43.474429 kernel: generic_sse : 2546 MB/sec Feb 8 23:40:43.474489 kernel: xor: using function: prefetch64-sse (4179 MB/sec) Feb 8 23:40:43.598136 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:40:43.614883 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:40:43.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.615000 audit: BPF prog-id=7 op=LOAD Feb 8 23:40:43.616000 audit: BPF prog-id=8 op=LOAD Feb 8 23:40:43.616848 systemd[1]: Starting systemd-udevd.service... Feb 8 23:40:43.640406 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 8 23:40:43.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.645661 systemd[1]: Started systemd-udevd.service. Feb 8 23:40:43.651870 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:40:43.672391 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 8 23:40:43.717293 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:40:43.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:43.720418 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:40:43.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:43.784590 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:40:43.841099 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 8 23:40:43.887095 kernel: libata version 3.00 loaded. Feb 8 23:40:43.890083 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 8 23:40:43.892081 kernel: scsi host0: ata_piix Feb 8 23:40:43.892236 kernel: scsi host1: ata_piix Feb 8 23:40:43.892352 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 8 23:40:43.892365 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 8 23:40:43.942865 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 8 23:40:43.942933 kernel: GPT:17805311 != 41943039 Feb 8 23:40:43.942947 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 8 23:40:43.947633 kernel: GPT:17805311 != 41943039 Feb 8 23:40:43.947676 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 8 23:40:43.952467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:40:44.243325 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440) Feb 8 23:40:44.274114 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:40:44.283410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:40:44.291223 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:40:44.292440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:40:44.303259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:40:44.305981 systemd[1]: Starting disk-uuid.service... Feb 8 23:40:44.333125 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:40:44.333320 disk-uuid[461]: Primary Header is updated. Feb 8 23:40:44.333320 disk-uuid[461]: Secondary Entries is updated. Feb 8 23:40:44.333320 disk-uuid[461]: Secondary Header is updated. 
Feb 8 23:40:44.346120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:40:45.412138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:40:45.412590 disk-uuid[462]: The operation has completed successfully. Feb 8 23:40:45.505509 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:40:45.507434 systemd[1]: Finished disk-uuid.service. Feb 8 23:40:45.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:45.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:45.535314 systemd[1]: Starting verity-setup.service... Feb 8 23:40:45.560129 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 8 23:40:45.656318 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:40:45.660617 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:40:45.666357 systemd[1]: Finished verity-setup.service. Feb 8 23:40:45.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:45.811141 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:40:45.812097 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:40:45.813432 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:40:45.815140 systemd[1]: Starting ignition-setup.service... Feb 8 23:40:45.817829 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 8 23:40:45.831517 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:40:45.831657 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:40:45.831688 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:40:45.851354 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:40:45.865008 systemd[1]: Finished ignition-setup.service. Feb 8 23:40:45.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:45.866377 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:40:45.959042 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:40:45.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:45.960000 audit: BPF prog-id=9 op=LOAD Feb 8 23:40:45.961243 systemd[1]: Starting systemd-networkd.service... Feb 8 23:40:46.005333 systemd-networkd[632]: lo: Link UP Feb 8 23:40:46.005346 systemd-networkd[632]: lo: Gained carrier Feb 8 23:40:46.005872 systemd-networkd[632]: Enumeration completed Feb 8 23:40:46.006115 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:40:46.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.006993 systemd[1]: Started systemd-networkd.service. Feb 8 23:40:46.008072 systemd-networkd[632]: eth0: Link UP Feb 8 23:40:46.008082 systemd-networkd[632]: eth0: Gained carrier Feb 8 23:40:46.008701 systemd[1]: Reached target network.target. Feb 8 23:40:46.011417 systemd[1]: Starting iscsiuio.service... 
Feb 8 23:40:46.017151 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.229/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:40:46.021992 systemd[1]: Started iscsiuio.service. Feb 8 23:40:46.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.023368 systemd[1]: Starting iscsid.service... Feb 8 23:40:46.027303 iscsid[637]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:40:46.027303 iscsid[637]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 8 23:40:46.027303 iscsid[637]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:40:46.027303 iscsid[637]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:40:46.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.033891 iscsid[637]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:40:46.033891 iscsid[637]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:40:46.029204 systemd[1]: Started iscsid.service. Feb 8 23:40:46.031774 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:40:46.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:46.043442 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:40:46.043996 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:40:46.044477 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:40:46.045024 systemd[1]: Reached target remote-fs.target. Feb 8 23:40:46.046274 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:40:46.055263 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:40:46.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.427002 ignition[544]: Ignition 2.14.0 Feb 8 23:40:46.427033 ignition[544]: Stage: fetch-offline Feb 8 23:40:46.427194 ignition[544]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:46.427246 ignition[544]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:40:46.432642 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:40:46.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.429556 ignition[544]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:40:46.429771 ignition[544]: parsed url from cmdline: "" Feb 8 23:40:46.435999 systemd[1]: Starting ignition-fetch.service... 
Feb 8 23:40:46.429780 ignition[544]: no config URL provided Feb 8 23:40:46.429793 ignition[544]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:40:46.429820 ignition[544]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:40:46.429832 ignition[544]: failed to fetch config: resource requires networking Feb 8 23:40:46.430334 ignition[544]: Ignition finished successfully Feb 8 23:40:46.454789 ignition[656]: Ignition 2.14.0 Feb 8 23:40:46.454817 ignition[656]: Stage: fetch Feb 8 23:40:46.455141 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:46.455199 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:40:46.457546 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:40:46.457771 ignition[656]: parsed url from cmdline: "" Feb 8 23:40:46.457781 ignition[656]: no config URL provided Feb 8 23:40:46.457795 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:40:46.457813 ignition[656]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:40:46.469183 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 8 23:40:46.469353 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 8 23:40:46.469477 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 8 23:40:46.752116 ignition[656]: GET result: OK Feb 8 23:40:46.752298 ignition[656]: parsing config with SHA512: c52185a03029736fa6911385d2c64f40993eebfbaf776977599380421337ca4b670d08168d04ac7959fea2c9105fa038e06b2525edef571de24d525d2ac5b6a4 Feb 8 23:40:46.845276 unknown[656]: fetched base config from "system" Feb 8 23:40:46.845310 unknown[656]: fetched base config from "system" Feb 8 23:40:46.846453 ignition[656]: fetch: fetch complete Feb 8 23:40:46.845325 unknown[656]: fetched user config from "openstack" Feb 8 23:40:46.846467 ignition[656]: fetch: fetch passed Feb 8 23:40:46.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.849530 systemd[1]: Finished ignition-fetch.service. Feb 8 23:40:46.846576 ignition[656]: Ignition finished successfully Feb 8 23:40:46.853764 systemd[1]: Starting ignition-kargs.service... Feb 8 23:40:46.874411 ignition[662]: Ignition 2.14.0 Feb 8 23:40:46.874452 ignition[662]: Stage: kargs Feb 8 23:40:46.874701 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:46.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.887941 systemd[1]: Finished ignition-kargs.service. Feb 8 23:40:46.874744 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:40:46.876950 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:40:46.879681 ignition[662]: kargs: kargs passed Feb 8 23:40:46.890937 systemd[1]: Starting ignition-disks.service... 
Feb 8 23:40:46.879772 ignition[662]: Ignition finished successfully Feb 8 23:40:46.904566 ignition[667]: Ignition 2.14.0 Feb 8 23:40:46.904579 ignition[667]: Stage: disks Feb 8 23:40:46.904704 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:46.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.908592 systemd[1]: Finished ignition-disks.service. Feb 8 23:40:46.904727 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:40:46.909991 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:40:46.905689 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:40:46.911120 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:40:46.906785 ignition[667]: disks: disks passed Feb 8 23:40:46.912230 systemd[1]: Reached target local-fs.target. Feb 8 23:40:46.906829 ignition[667]: Ignition finished successfully Feb 8 23:40:46.913935 systemd[1]: Reached target sysinit.target. Feb 8 23:40:46.915384 systemd[1]: Reached target basic.target. Feb 8 23:40:46.918632 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:40:46.948247 systemd-fsck[675]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 8 23:40:46.960532 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:40:46.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:46.963483 systemd[1]: Mounting sysroot.mount... Feb 8 23:40:46.985132 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Feb 8 23:40:46.986696 systemd[1]: Mounted sysroot.mount. Feb 8 23:40:46.989198 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:40:46.993895 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:40:46.995649 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 8 23:40:46.996971 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 8 23:40:47.002860 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:40:47.002921 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:40:47.009284 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:40:47.017407 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:40:47.024611 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:40:47.047023 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:40:47.048243 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Feb 8 23:40:47.055358 initrd-setup-root[695]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:40:47.059788 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:40:47.059810 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:40:47.059821 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:40:47.064286 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:40:47.069818 initrd-setup-root[727]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:40:47.075876 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:40:47.153888 systemd[1]: Finished initrd-setup-root.service. 
Feb 8 23:40:47.164022 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 8 23:40:47.164109 kernel: audit: type=1130 audit(1707435647.155:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.157018 systemd[1]: Starting ignition-mount.service... Feb 8 23:40:47.166798 systemd[1]: Starting sysroot-boot.service... Feb 8 23:40:47.190271 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:40:47.190459 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:40:47.210135 ignition[750]: INFO : Ignition 2.14.0 Feb 8 23:40:47.210135 ignition[750]: INFO : Stage: mount Feb 8 23:40:47.212184 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:47.212184 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:40:47.212184 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:40:47.221357 kernel: audit: type=1130 audit(1707435647.214:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:47.221500 ignition[750]: INFO : mount: mount passed Feb 8 23:40:47.221500 ignition[750]: INFO : Ignition finished successfully Feb 8 23:40:47.213305 systemd[1]: Finished ignition-mount.service. Feb 8 23:40:47.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.226550 systemd[1]: Finished sysroot-boot.service. Feb 8 23:40:47.231091 kernel: audit: type=1130 audit(1707435647.226:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.254824 coreos-metadata[681]: Feb 08 23:40:47.254 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 8 23:40:47.269820 coreos-metadata[681]: Feb 08 23:40:47.269 INFO Fetch successful Feb 8 23:40:47.270665 coreos-metadata[681]: Feb 08 23:40:47.270 INFO wrote hostname ci-3510-3-2-5-de7ca92588.novalocal to /sysroot/etc/hostname Feb 8 23:40:47.275596 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 8 23:40:47.275805 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 8 23:40:47.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.279288 systemd[1]: Starting ignition-files.service... Feb 8 23:40:47.285685 kernel: audit: type=1130 audit(1707435647.277:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:47.285711 kernel: audit: type=1131 audit(1707435647.277:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:47.295027 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:40:47.397209 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Feb 8 23:40:47.426416 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:40:47.426487 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:40:47.426515 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:40:47.544527 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:40:47.566885 ignition[777]: INFO : Ignition 2.14.0 Feb 8 23:40:47.566885 ignition[777]: INFO : Stage: files Feb 8 23:40:47.570180 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:47.570180 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:40:47.570180 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:40:47.581275 systemd-networkd[632]: eth0: Gained IPv6LL Feb 8 23:40:47.584887 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:40:47.599698 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:40:47.602306 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:40:47.644967 ignition[777]: INFO : files: ensureUsers: op(1): 
[finished] creating or modifying user "core" Feb 8 23:40:47.646947 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:40:47.654213 unknown[777]: wrote ssh authorized keys file for user: core Feb 8 23:40:47.655831 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:40:47.655831 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:40:47.655831 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:40:48.093686 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:40:48.847810 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:40:48.850124 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:40:48.850124 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:40:48.850124 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:40:49.187959 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:40:49.673624 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 
a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:40:49.673624 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:40:49.691542 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:40:49.691542 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:40:49.829001 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:40:50.775758 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 8 23:40:50.777612 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:40:50.777612 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:40:50.777612 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:40:50.887366 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 8 23:40:53.192563 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 8 23:40:53.194303 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:40:53.195144 ignition[777]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:40:53.196111 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:40:53.196928 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:40:53.197899 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:40:53.238563 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:40:53.238563 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:40:53.238563 ignition[777]: INFO : files: op(a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:40:53.258582 ignition[777]: INFO : files: op(a): op(b): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:40:53.258582 ignition[777]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:40:53.258582 ignition[777]: INFO : files: op(a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:40:53.258582 ignition[777]: INFO : files: op(c): [started] processing unit "coreos-metadata.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(c): op(d): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(c): op(d): [finished] writing systemd drop-in 
"20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(c): [finished] processing unit "coreos-metadata.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(e): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(e): op(f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(e): op(f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(e): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(10): [started] processing unit "prepare-critools.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(10): [finished] processing unit "prepare-critools.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(14): [started] 
setting preset to enabled for "prepare-critools.service" Feb 8 23:40:53.273130 ignition[777]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:40:53.365303 kernel: audit: type=1130 audit(1707435653.290:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.365363 kernel: audit: type=1130 audit(1707435653.318:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.365396 kernel: audit: type=1131 audit(1707435653.318:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.365428 kernel: audit: type=1130 audit(1707435653.340:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:53.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.365800 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:40:53.365800 ignition[777]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:40:53.365800 ignition[777]: INFO : files: files passed Feb 8 23:40:53.365800 ignition[777]: INFO : Ignition finished successfully Feb 8 23:40:53.384653 kernel: audit: type=1130 audit(1707435653.370:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.384678 kernel: audit: type=1131 audit(1707435653.370:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.284579 systemd[1]: Finished ignition-files.service. Feb 8 23:40:53.295153 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 8 23:40:53.390470 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:40:53.302993 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:40:53.306519 systemd[1]: Starting ignition-quench.service... Feb 8 23:40:53.317100 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:40:53.317393 systemd[1]: Finished ignition-quench.service. Feb 8 23:40:53.338696 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:40:53.340630 systemd[1]: Reached target ignition-complete.target. Feb 8 23:40:53.353223 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:40:53.369235 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:40:53.369344 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:40:53.370559 systemd[1]: Reached target initrd-fs.target. Feb 8 23:40:53.384089 systemd[1]: Reached target initrd.target. Feb 8 23:40:53.406719 kernel: audit: type=1130 audit(1707435653.399:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.385126 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:40:53.385894 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:40:53.399473 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:40:53.401294 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:40:53.416420 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:40:53.416545 systemd[1]: Finished initrd-cleanup.service. 
Feb 8 23:40:53.424251 kernel: audit: type=1130 audit(1707435653.417:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.424273 kernel: audit: type=1131 audit(1707435653.417:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.418407 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:40:53.424656 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:40:53.425543 systemd[1]: Stopped target timers.target. Feb 8 23:40:53.426407 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:40:53.434801 kernel: audit: type=1131 audit(1707435653.430:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.426456 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:40:53.430926 systemd[1]: Stopped target initrd.target. Feb 8 23:40:53.435265 systemd[1]: Stopped target basic.target. Feb 8 23:40:53.436251 systemd[1]: Stopped target ignition-complete.target. 
Feb 8 23:40:53.437230 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:40:53.438090 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:40:53.438949 systemd[1]: Stopped target remote-fs.target. Feb 8 23:40:53.439816 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:40:53.440764 systemd[1]: Stopped target sysinit.target. Feb 8 23:40:53.441608 systemd[1]: Stopped target local-fs.target. Feb 8 23:40:53.442432 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:40:53.443317 systemd[1]: Stopped target swap.target. Feb 8 23:40:53.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.444199 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:40:53.444256 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:40:53.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.445203 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:40:53.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.445946 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:40:53.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.445998 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:40:53.446948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:40:53.446990 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Feb 8 23:40:53.447875 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:40:53.447914 systemd[1]: Stopped ignition-files.service. Feb 8 23:40:53.449505 systemd[1]: Stopping ignition-mount.service... Feb 8 23:40:53.454240 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:40:53.454315 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:40:53.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.466414 ignition[815]: INFO : Ignition 2.14.0 Feb 8 23:40:53.466414 ignition[815]: INFO : Stage: umount Feb 8 23:40:53.466414 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:53.466414 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:40:53.466414 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:40:53.466414 ignition[815]: INFO : umount: umount passed Feb 8 23:40:53.466414 ignition[815]: INFO : Ignition finished successfully Feb 8 23:40:53.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:53.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.458813 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:40:53.459534 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:40:53.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.459601 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:40:53.466974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:40:53.467074 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:40:53.468269 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:40:53.468381 systemd[1]: Stopped ignition-mount.service. Feb 8 23:40:53.469659 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:40:53.469706 systemd[1]: Stopped ignition-disks.service. Feb 8 23:40:53.471226 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:40:53.471289 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:40:53.472384 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:40:53.472442 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:40:53.473948 systemd[1]: Stopped target network.target. 
Feb 8 23:40:53.474884 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:40:53.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.474957 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:40:53.477415 systemd[1]: Stopped target paths.target. Feb 8 23:40:53.478445 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:40:53.482121 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:40:53.483373 systemd[1]: Stopped target slices.target. Feb 8 23:40:53.484418 systemd[1]: Stopped target sockets.target. Feb 8 23:40:53.485482 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:40:53.485511 systemd[1]: Closed iscsid.socket. Feb 8 23:40:53.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.486961 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:40:53.486992 systemd[1]: Closed iscsiuio.socket. Feb 8 23:40:53.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.487846 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:40:53.487908 systemd[1]: Stopped ignition-setup.service. Feb 8 23:40:53.503000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:40:53.488887 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:40:53.490327 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:40:53.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:53.492877 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:40:53.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.494108 systemd-networkd[632]: eth0: DHCPv6 lease lost Feb 8 23:40:53.508000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:40:53.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.495192 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:40:53.495300 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:40:53.499715 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:40:53.499864 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:40:53.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.501300 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:40:53.501353 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:40:53.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.503570 systemd[1]: Stopping network-cleanup.service... Feb 8 23:40:53.505874 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:40:53.505931 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:40:53.506810 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:40:53.506858 systemd[1]: Stopped systemd-sysctl.service. 
Feb 8 23:40:53.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.507898 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:40:53.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.507938 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:40:53.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.508776 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:40:53.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.511256 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 8 23:40:53.511853 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:40:53.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.512006 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:40:53.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:53.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:53.514407 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:40:53.514534 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:40:53.516185 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:40:53.516238 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:40:53.518723 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:40:53.518764 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:40:53.519585 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:40:53.519643 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:40:53.520571 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:40:53.520611 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:40:53.521537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:40:53.521578 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:40:53.522701 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:40:53.522743 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:40:53.524691 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:40:53.525952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:40:53.526016 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:40:53.526923 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:40:53.527036 systemd[1]: Stopped network-cleanup.service. Feb 8 23:40:53.532223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:40:53.532302 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:40:53.533308 systemd[1]: Reached target initrd-switch-root.target. 
Feb 8 23:40:53.534881 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:40:53.553735 systemd[1]: Switching root. Feb 8 23:40:53.572955 iscsid[637]: iscsid shutting down. Feb 8 23:40:53.573570 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 8 23:40:53.573624 systemd-journald[185]: Journal stopped Feb 8 23:40:58.667694 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:40:58.667744 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:40:58.667758 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:40:58.667774 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:40:58.667785 kernel: SELinux: policy capability open_perms=1 Feb 8 23:40:58.667797 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:40:58.667808 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:40:58.667821 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:40:58.667835 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:40:58.667847 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:40:58.667858 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:40:58.667872 systemd[1]: Successfully loaded SELinux policy in 95.823ms. Feb 8 23:40:58.667890 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.814ms. Feb 8 23:40:58.667905 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:40:58.667919 systemd[1]: Detected virtualization kvm. Feb 8 23:40:58.667931 systemd[1]: Detected architecture x86-64. Feb 8 23:40:58.667945 systemd[1]: Detected first boot. Feb 8 23:40:58.667958 systemd[1]: Hostname set to . 
Feb 8 23:40:58.667971 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:40:58.667984 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:40:58.667996 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:40:58.668009 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:40:58.668023 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:40:58.668038 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:40:58.668199 kernel: kauditd_printk_skb: 46 callbacks suppressed Feb 8 23:40:58.668216 kernel: audit: type=1334 audit(1707435658.420:87): prog-id=12 op=LOAD Feb 8 23:40:58.668228 kernel: audit: type=1334 audit(1707435658.420:88): prog-id=3 op=UNLOAD Feb 8 23:40:58.668240 kernel: audit: type=1334 audit(1707435658.421:89): prog-id=13 op=LOAD Feb 8 23:40:58.668251 kernel: audit: type=1334 audit(1707435658.423:90): prog-id=14 op=LOAD Feb 8 23:40:58.668263 kernel: audit: type=1334 audit(1707435658.423:91): prog-id=4 op=UNLOAD Feb 8 23:40:58.668274 kernel: audit: type=1334 audit(1707435658.423:92): prog-id=5 op=UNLOAD Feb 8 23:40:58.668286 kernel: audit: type=1334 audit(1707435658.425:93): prog-id=15 op=LOAD Feb 8 23:40:58.668301 kernel: audit: type=1334 audit(1707435658.425:94): prog-id=12 op=UNLOAD Feb 8 23:40:58.668313 kernel: audit: type=1334 audit(1707435658.428:95): prog-id=16 op=LOAD Feb 8 23:40:58.668324 kernel: audit: type=1334 audit(1707435658.432:96): prog-id=17 op=LOAD Feb 8 23:40:58.668337 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:40:58.668350 systemd[1]: Stopped iscsiuio.service. 
Feb 8 23:40:58.668363 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:40:58.668375 systemd[1]: Stopped iscsid.service. Feb 8 23:40:58.668387 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:40:58.668405 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:40:58.668418 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:40:58.668432 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:40:58.668444 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:40:58.668457 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 8 23:40:58.668493 systemd[1]: Created slice system-getty.slice. Feb 8 23:40:58.668533 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:40:58.668571 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:40:58.668600 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:40:58.668631 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:40:58.668663 systemd[1]: Created slice user.slice. Feb 8 23:40:58.668696 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:40:58.668727 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:40:58.668760 systemd[1]: Set up automount boot.automount. Feb 8 23:40:58.668774 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:40:58.668790 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:40:58.668803 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:40:58.668815 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:40:58.668829 systemd[1]: Reached target integritysetup.target. Feb 8 23:40:58.668842 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:40:58.668854 systemd[1]: Reached target remote-fs.target. Feb 8 23:40:58.668866 systemd[1]: Reached target slices.target. Feb 8 23:40:58.668879 systemd[1]: Reached target swap.target. 
Feb 8 23:40:58.668891 systemd[1]: Reached target torcx.target. Feb 8 23:40:58.668903 systemd[1]: Reached target veritysetup.target. Feb 8 23:40:58.668917 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:40:58.668934 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:40:58.668946 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:40:58.668959 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:40:58.668970 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:40:58.668984 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:40:58.668996 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:40:58.669008 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:40:58.669020 systemd[1]: Mounting media.mount... Feb 8 23:40:58.669035 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:40:58.669047 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:40:58.670792 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:40:58.670808 systemd[1]: Mounting tmp.mount... Feb 8 23:40:58.670822 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:40:58.670835 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:40:58.670847 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:40:58.670859 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:40:58.670873 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:40:58.670889 systemd[1]: Starting modprobe@drm.service... Feb 8 23:40:58.670901 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:40:58.670913 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:40:58.670926 systemd[1]: Starting modprobe@loop.service... Feb 8 23:40:58.670940 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 8 23:40:58.670953 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:40:58.670965 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:40:58.670978 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:40:58.670990 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:40:58.671005 systemd[1]: Stopped systemd-journald.service. Feb 8 23:40:58.671019 kernel: loop: module loaded Feb 8 23:40:58.671031 systemd[1]: Starting systemd-journald.service... Feb 8 23:40:58.671044 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:40:58.671077 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:40:58.671091 kernel: fuse: init (API version 7.34) Feb 8 23:40:58.671104 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:40:58.671117 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:40:58.671129 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:40:58.671144 systemd[1]: Stopped verity-setup.service. Feb 8 23:40:58.671157 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:40:58.671170 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:40:58.671182 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:40:58.671194 systemd[1]: Mounted media.mount. Feb 8 23:40:58.671210 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:40:58.671223 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:40:58.671235 systemd[1]: Mounted tmp.mount. Feb 8 23:40:58.671247 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:40:58.671263 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:40:58.671278 systemd-journald[919]: Journal started Feb 8 23:40:58.671323 systemd-journald[919]: Runtime Journal (/run/log/journal/b4bce1e2efee4fc694474f52cb3d7acf) is 4.9M, max 39.5M, 34.5M free. 
Feb 8 23:40:54.054000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:40:54.196000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:40:54.196000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:40:54.196000 audit: BPF prog-id=10 op=LOAD Feb 8 23:40:54.196000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:40:54.197000 audit: BPF prog-id=11 op=LOAD Feb 8 23:40:54.197000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:40:54.357000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:40:54.357000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:54.357000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:40:54.359000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:40:54.359000 audit[847]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:54.359000 audit: CWD cwd="/" Feb 8 23:40:54.359000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:54.359000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:54.359000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:40:58.420000 audit: BPF prog-id=12 op=LOAD Feb 8 23:40:58.420000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:40:58.421000 audit: BPF prog-id=13 op=LOAD Feb 8 23:40:58.423000 audit: BPF prog-id=14 op=LOAD Feb 8 23:40:58.423000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:40:58.423000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:40:58.425000 audit: BPF prog-id=15 op=LOAD Feb 8 23:40:58.425000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:40:58.428000 audit: BPF prog-id=16 op=LOAD Feb 8 23:40:58.432000 audit: BPF prog-id=17 op=LOAD Feb 8 23:40:58.432000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:40:58.432000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:40:58.434000 audit: BPF prog-id=18 op=LOAD Feb 8 23:40:58.434000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:40:58.435000 audit: BPF prog-id=19 op=LOAD Feb 8 23:40:58.437000 audit: BPF prog-id=20 op=LOAD Feb 8 23:40:58.437000 audit: BPF prog-id=16 op=UNLOAD Feb 
8 23:40:58.437000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:40:58.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.442000 audit: BPF prog-id=18 op=UNLOAD Feb 8 23:40:58.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:58.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.605000 audit: BPF prog-id=21 op=LOAD Feb 8 23:40:58.606000 audit: BPF prog-id=22 op=LOAD Feb 8 23:40:58.607000 audit: BPF prog-id=23 op=LOAD Feb 8 23:40:58.608000 audit: BPF prog-id=19 op=UNLOAD Feb 8 23:40:58.608000 audit: BPF prog-id=20 op=UNLOAD Feb 8 23:40:58.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.666000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:40:58.666000 audit[919]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd540258b0 a2=4000 a3=7ffd5402594c items=0 ppid=1 pid=919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:58.666000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:40:58.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:54.353473 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:40:58.418006 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:40:58.680206 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:40:58.680234 systemd[1]: Started systemd-journald.service. Feb 8 23:40:58.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:58.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:54.354345 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:40:58.418043 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 8 23:40:54.354368 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:40:58.438220 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:40:58.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:54.354454 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:40:58.676567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:40:54.354467 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:40:58.676705 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:40:54.354502 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:40:58.677400 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:40:54.354517 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:40:58.677515 systemd[1]: Finished modprobe@drm.service. Feb 8 23:40:54.354775 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:40:58.678219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:40:58.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:54.354816 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:40:58.679319 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:40:54.354831 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:40:58.680313 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:40:54.355763 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:40:58.680434 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:40:54.355800 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:40:58.681184 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:40:54.355822 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:40:58.681312 systemd[1]: Finished modprobe@loop.service. 
Feb 8 23:40:54.355839 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:40:54.355858 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:40:54.355874 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:40:57.827143 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:57.827449 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:57.827583 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:57.827784 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:57.827852 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:40:57.827916 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-02-08T23:40:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:40:58.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.684233 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:40:58.687264 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:40:58.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.688180 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:40:58.689079 systemd[1]: Reached target network-pre.target. Feb 8 23:40:58.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.691044 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:40:58.692531 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:40:58.693009 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:40:58.697783 systemd[1]: Starting systemd-hwdb-update.service... 
Feb 8 23:40:58.705908 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:40:58.706485 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:40:58.707533 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:40:58.708274 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:40:58.716336 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:40:58.718135 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:40:58.718701 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:40:58.722155 systemd-journald[919]: Time spent on flushing to /var/log/journal/b4bce1e2efee4fc694474f52cb3d7acf is 56.742ms for 1129 entries. Feb 8 23:40:58.722155 systemd-journald[919]: System Journal (/var/log/journal/b4bce1e2efee4fc694474f52cb3d7acf) is 8.0M, max 584.8M, 576.8M free. Feb 8 23:40:59.083793 systemd-journald[919]: Received client request to flush runtime journal. Feb 8 23:40:58.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:59.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:59.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.726298 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:40:58.728141 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:40:59.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:58.749568 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:40:59.085944 udevadm[957]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:40:58.751247 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:40:58.936230 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:40:59.024963 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:40:59.026445 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:40:59.052879 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:40:59.085028 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:40:59.627403 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:40:59.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:59.629000 audit: BPF prog-id=24 op=LOAD Feb 8 23:40:59.630000 audit: BPF prog-id=25 op=LOAD Feb 8 23:40:59.630000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:40:59.630000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:40:59.632520 systemd[1]: Starting systemd-udevd.service... Feb 8 23:40:59.678487 systemd-udevd[961]: Using default interface naming scheme 'v252'. 
Feb 8 23:40:59.747533 systemd[1]: Started systemd-udevd.service. Feb 8 23:40:59.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:59.752000 audit: BPF prog-id=26 op=LOAD Feb 8 23:40:59.755485 systemd[1]: Starting systemd-networkd.service... Feb 8 23:40:59.772000 audit: BPF prog-id=27 op=LOAD Feb 8 23:40:59.772000 audit: BPF prog-id=28 op=LOAD Feb 8 23:40:59.772000 audit: BPF prog-id=29 op=LOAD Feb 8 23:40:59.774366 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:40:59.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:59.820787 systemd[1]: Started systemd-userdbd.service. Feb 8 23:40:59.838256 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:40:59.910098 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 8 23:40:59.914095 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:40:59.921155 systemd-networkd[975]: lo: Link UP Feb 8 23:40:59.921165 systemd-networkd[975]: lo: Gained carrier Feb 8 23:40:59.921666 systemd-networkd[975]: Enumeration completed Feb 8 23:40:59.921798 systemd[1]: Started systemd-networkd.service. Feb 8 23:40:59.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:59.922523 systemd-networkd[975]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 8 23:40:59.924414 systemd-networkd[975]: eth0: Link UP Feb 8 23:40:59.924425 systemd-networkd[975]: eth0: Gained carrier Feb 8 23:40:59.934217 systemd-networkd[975]: eth0: DHCPv4 address 172.24.4.229/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:40:59.940000 audit[965]: AVC avc: denied { confidentiality } for pid=965 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:40:59.940000 audit[965]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5596f5058f70 a1=32194 a2=7f8abc0e0bc5 a3=5 items=108 ppid=961 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:59.940000 audit: CWD cwd="/" Feb 8 23:40:59.940000 audit: PATH item=0 name=(null) inode=1038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=1 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=2 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=3 name=(null) inode=13199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=4 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=5 
name=(null) inode=13200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=6 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=7 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=8 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=9 name=(null) inode=13202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=10 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=11 name=(null) inode=13203 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=12 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=13 name=(null) inode=13204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=14 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=15 name=(null) inode=13205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=16 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=17 name=(null) inode=13206 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=18 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=19 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=20 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=21 name=(null) inode=13208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=22 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=23 name=(null) inode=13209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=24 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=25 name=(null) inode=13210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=26 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=27 name=(null) inode=13211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=28 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=29 name=(null) inode=13212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=30 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=31 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=32 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=33 name=(null) inode=13214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=34 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=35 name=(null) inode=13215 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=36 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=37 name=(null) inode=13216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=38 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=39 name=(null) inode=13217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=40 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=41 name=(null) inode=13218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=42 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=43 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=44 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=45 name=(null) inode=13220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=46 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=47 name=(null) inode=13221 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=48 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=49 name=(null) inode=13222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=50 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH 
item=51 name=(null) inode=13223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=52 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=53 name=(null) inode=13224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=54 name=(null) inode=1038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=55 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=56 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=57 name=(null) inode=13226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=58 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=59 name=(null) inode=13227 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=60 name=(null) inode=13225 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=61 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=62 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=63 name=(null) inode=13229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=64 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=65 name=(null) inode=13230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=66 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=67 name=(null) inode=13231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=68 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=69 name=(null) inode=13232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=70 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=71 name=(null) inode=13233 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=72 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=73 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=74 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=75 name=(null) inode=13235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=76 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=77 name=(null) inode=13236 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=78 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=79 name=(null) inode=13237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=80 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=81 name=(null) inode=13238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=82 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=83 name=(null) inode=13239 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=84 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=85 name=(null) inode=13240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=86 name=(null) inode=13240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=87 name=(null) inode=13241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 8 23:40:59.940000 audit: PATH item=88 name=(null) inode=13240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=89 name=(null) inode=13242 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=90 name=(null) inode=13240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=91 name=(null) inode=13243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=92 name=(null) inode=13240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=93 name=(null) inode=13244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=94 name=(null) inode=13240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=95 name=(null) inode=13245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=96 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=97 
name=(null) inode=13246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=98 name=(null) inode=13246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=99 name=(null) inode=13247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=100 name=(null) inode=13246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=101 name=(null) inode=13248 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=102 name=(null) inode=13246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=103 name=(null) inode=13249 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=104 name=(null) inode=13246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=105 name=(null) inode=13250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=106 name=(null) inode=13246 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PATH item=107 name=(null) inode=13251 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:59.940000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:40:59.950098 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:40:59.969314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:40:59.987099 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 8 23:40:59.992116 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:41:00.030449 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:41:00.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:00.032103 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:41:00.065764 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:41:00.106910 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:41:00.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:00.108334 systemd[1]: Reached target cryptsetup.target. Feb 8 23:41:00.111591 systemd[1]: Starting lvm2-activation.service... Feb 8 23:41:00.121092 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:41:00.155037 systemd[1]: Finished lvm2-activation.service. 
Feb 8 23:41:00.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:00.156390 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:41:00.157517 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:41:00.157590 systemd[1]: Reached target local-fs.target. Feb 8 23:41:00.158641 systemd[1]: Reached target machines.target. Feb 8 23:41:00.162176 systemd[1]: Starting ldconfig.service... Feb 8 23:41:00.164512 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:41:00.164639 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:41:00.166748 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:41:00.170524 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:41:00.178902 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:41:00.181708 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:41:00.181821 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:41:00.188005 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:41:00.190854 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl) Feb 8 23:41:00.195613 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:41:00.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:41:00.219431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:41:00.587451 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:41:00.967655 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:41:01.014039 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:41:01.016200 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:41:01.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.021519 systemd-tmpfiles[996]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:41:01.189273 systemd-fsck[1001]: fsck.fat 4.2 (2021-01-31) Feb 8 23:41:01.189273 systemd-fsck[1001]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:41:01.194642 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:41:01.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.198576 systemd[1]: Mounting boot.mount... Feb 8 23:41:01.213404 systemd-networkd[975]: eth0: Gained IPv6LL Feb 8 23:41:01.230632 systemd[1]: Mounted boot.mount. Feb 8 23:41:01.271830 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:41:01.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.364646 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 8 23:41:01.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.368862 systemd[1]: Starting audit-rules.service... Feb 8 23:41:01.372837 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:41:01.380235 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:41:01.387000 audit: BPF prog-id=30 op=LOAD Feb 8 23:41:01.393404 systemd[1]: Starting systemd-resolved.service... Feb 8 23:41:01.399000 audit: BPF prog-id=31 op=LOAD Feb 8 23:41:01.403514 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:41:01.408671 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:41:01.410447 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:41:01.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.411200 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:41:01.419000 audit[1016]: SYSTEM_BOOT pid=1016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.422450 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:41:01.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.458078 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 8 23:41:01.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:41:01.475000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:41:01.475000 audit[1024]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0a808070 a2=420 a3=0 items=0 ppid=1004 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:41:01.475000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:41:01.475469 augenrules[1024]: No rules Feb 8 23:41:01.476336 systemd[1]: Finished audit-rules.service. Feb 8 23:41:01.483703 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:41:01.484278 systemd[1]: Reached target time-set.target. Feb 8 23:41:02.378784 systemd-timesyncd[1014]: Contacted time server 51.68.44.27:123 (0.flatcar.pool.ntp.org). Feb 8 23:41:02.378902 systemd-timesyncd[1014]: Initial clock synchronization to Thu 2024-02-08 23:41:02.378616 UTC. Feb 8 23:41:02.390728 systemd-resolved[1013]: Positive Trust Anchors: Feb 8 23:41:02.390757 systemd-resolved[1013]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:41:02.390795 systemd-resolved[1013]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:41:02.408084 systemd-resolved[1013]: Using system hostname 'ci-3510-3-2-5-de7ca92588.novalocal'. Feb 8 23:41:02.410760 systemd[1]: Started systemd-resolved.service. Feb 8 23:41:02.412106 systemd[1]: Reached target network.target. Feb 8 23:41:02.413196 systemd[1]: Reached target nss-lookup.target. Feb 8 23:41:02.620390 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:41:02.655708 systemd[1]: Finished ldconfig.service. Feb 8 23:41:02.660171 systemd[1]: Starting systemd-update-done.service... Feb 8 23:41:02.674249 systemd[1]: Finished systemd-update-done.service. Feb 8 23:41:02.676167 systemd[1]: Reached target sysinit.target. Feb 8 23:41:02.678257 systemd[1]: Started motdgen.path. Feb 8 23:41:02.679784 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:41:02.682069 systemd[1]: Started logrotate.timer. Feb 8 23:41:02.683798 systemd[1]: Started mdadm.timer. Feb 8 23:41:02.685275 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:41:02.686708 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:41:02.686935 systemd[1]: Reached target paths.target. Feb 8 23:41:02.688373 systemd[1]: Reached target timers.target. Feb 8 23:41:02.692992 systemd[1]: Listening on dbus.socket. 
Feb 8 23:41:02.696430 systemd[1]: Starting docker.socket... Feb 8 23:41:02.703122 systemd[1]: Listening on sshd.socket. Feb 8 23:41:02.704398 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:41:02.705405 systemd[1]: Listening on docker.socket. Feb 8 23:41:02.706636 systemd[1]: Reached target sockets.target. Feb 8 23:41:02.707703 systemd[1]: Reached target basic.target. Feb 8 23:41:02.708899 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:41:02.708967 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:41:02.711064 systemd[1]: Starting containerd.service... Feb 8 23:41:02.714150 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 8 23:41:02.717520 systemd[1]: Starting dbus.service... Feb 8 23:41:02.723639 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:41:02.735621 systemd[1]: Starting extend-filesystems.service... Feb 8 23:41:02.736961 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:41:02.742478 systemd[1]: Starting motdgen.service... Feb 8 23:41:02.748480 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:41:02.752237 systemd[1]: Starting prepare-critools.service... Feb 8 23:41:02.756723 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:41:02.759740 systemd[1]: Starting sshd-keygen.service... Feb 8 23:41:02.764135 systemd[1]: Starting systemd-logind.service... Feb 8 23:41:02.766929 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 8 23:41:02.767008 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:41:02.768957 jq[1038]: false Feb 8 23:41:02.767533 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:41:02.768248 systemd[1]: Starting update-engine.service... Feb 8 23:41:02.769840 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:41:02.772668 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:41:02.772903 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:41:02.782655 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:41:02.782887 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:41:02.792261 jq[1047]: true Feb 8 23:41:02.806569 tar[1049]: crictl Feb 8 23:41:02.809052 dbus-daemon[1035]: [system] SELinux support is enabled Feb 8 23:41:02.809464 systemd[1]: Started dbus.service. Feb 8 23:41:02.812158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:41:02.812196 systemd[1]: Reached target system-config.target. Feb 8 23:41:02.812722 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:41:02.812750 systemd[1]: Reached target user-config.target. 
Feb 8 23:41:02.823619 tar[1060]: ./ Feb 8 23:41:02.823619 tar[1060]: ./macvlan Feb 8 23:41:02.831741 jq[1065]: true Feb 8 23:41:02.832227 extend-filesystems[1039]: Found vda Feb 8 23:41:02.837303 extend-filesystems[1039]: Found vda1 Feb 8 23:41:02.838026 extend-filesystems[1039]: Found vda2 Feb 8 23:41:02.839369 extend-filesystems[1039]: Found vda3 Feb 8 23:41:02.843788 extend-filesystems[1039]: Found usr Feb 8 23:41:02.845502 extend-filesystems[1039]: Found vda4 Feb 8 23:41:02.846661 extend-filesystems[1039]: Found vda6 Feb 8 23:41:02.847282 extend-filesystems[1039]: Found vda7 Feb 8 23:41:02.848091 extend-filesystems[1039]: Found vda9 Feb 8 23:41:02.849975 extend-filesystems[1039]: Checking size of /dev/vda9 Feb 8 23:41:02.861416 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:41:02.861591 systemd[1]: Finished motdgen.service. Feb 8 23:41:02.899496 extend-filesystems[1039]: Resized partition /dev/vda9 Feb 8 23:41:02.907868 extend-filesystems[1090]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:41:02.930343 env[1051]: time="2024-02-08T23:41:02.930279147Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:41:02.946921 systemd-logind[1045]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:41:02.946964 systemd-logind[1045]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:41:02.947642 systemd-logind[1045]: New seat seat0. Feb 8 23:41:02.949625 bash[1087]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:41:02.950018 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:41:02.950852 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 8 23:41:02.955955 update_engine[1046]: I0208 23:41:02.954742 1046 main.cc:92] Flatcar Update Engine starting Feb 8 23:41:02.963768 systemd[1]: Started systemd-logind.service. Feb 8 23:41:02.965510 systemd[1]: Started update-engine.service. 
Feb 8 23:41:02.968016 systemd[1]: Started locksmithd.service. Feb 8 23:41:02.969153 update_engine[1046]: I0208 23:41:02.969120 1046 update_check_scheduler.cc:74] Next update check in 11m49s Feb 8 23:41:03.004834 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 8 23:41:03.024629 coreos-metadata[1034]: Feb 08 23:41:03.024 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 8 23:41:03.056011 env[1051]: time="2024-02-08T23:41:03.034460008Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.057016753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.058745896Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.058777676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.059429348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.059452050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.059473010Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.059485934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.059574490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.060353651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:41:03.060851 env[1051]: time="2024-02-08T23:41:03.060499635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:41:03.057170 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:41:03.061536 extend-filesystems[1090]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:41:03.061536 extend-filesystems[1090]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 8 23:41:03.061536 extend-filesystems[1090]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 8 23:41:03.064934 env[1051]: time="2024-02-08T23:41:03.060519883Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 8 23:41:03.064934 env[1051]: time="2024-02-08T23:41:03.060580727Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:41:03.064934 env[1051]: time="2024-02-08T23:41:03.060595605Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:41:03.057381 systemd[1]: Finished extend-filesystems.service. Feb 8 23:41:03.065094 extend-filesystems[1039]: Resized filesystem in /dev/vda9 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071345777Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071394208Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071411891Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071452347Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071471373Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071488244Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071504194Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071521927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071537667Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071554609Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071569937Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071584755Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071742381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:41:03.072017 env[1051]: time="2024-02-08T23:41:03.071862246Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072156728Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072187075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072204027Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072246957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072262707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072278607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072293004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072308523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072323240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072338008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072393 env[1051]: time="2024-02-08T23:41:03.072368305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072639 env[1051]: time="2024-02-08T23:41:03.072401527Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:41:03.072639 env[1051]: time="2024-02-08T23:41:03.072559954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072639 env[1051]: time="2024-02-08T23:41:03.072585512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072639 env[1051]: time="2024-02-08T23:41:03.072603065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.072639 env[1051]: time="2024-02-08T23:41:03.072618053Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 8 23:41:03.072755 env[1051]: time="2024-02-08T23:41:03.072637780Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:41:03.072755 env[1051]: time="2024-02-08T23:41:03.072652358Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:41:03.072755 env[1051]: time="2024-02-08T23:41:03.072676503Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:41:03.072755 env[1051]: time="2024-02-08T23:41:03.072715376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:41:03.074556 systemd[1]: Started containerd.service. Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.072985572Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: 
Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.073078216Z" level=info msg="Connect containerd service" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.073131727Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.074043697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.074362345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.074410876Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.074884704Z" level=info msg="Start subscribing containerd event" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.074935900Z" level=info msg="Start recovering state" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.075014307Z" level=info msg="Start event monitor" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.075047620Z" level=info msg="Start snapshots syncer" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.075067497Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.075079480Z" level=info msg="Start streaming server" Feb 8 23:41:03.076850 env[1051]: time="2024-02-08T23:41:03.075313599Z" level=info msg="containerd successfully booted in 0.145934s" Feb 8 23:41:03.081416 tar[1060]: ./static Feb 8 23:41:03.140253 tar[1060]: ./vlan Feb 8 23:41:03.216902 tar[1060]: ./portmap Feb 8 23:41:03.285965 tar[1060]: ./host-local Feb 8 23:41:03.348096 tar[1060]: ./vrf Feb 8 23:41:03.369029 coreos-metadata[1034]: Feb 08 23:41:03.368 INFO Fetch successful Feb 8 23:41:03.369029 coreos-metadata[1034]: Feb 08 23:41:03.369 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 8 23:41:03.386577 coreos-metadata[1034]: Feb 08 23:41:03.386 INFO Fetch successful Feb 8 23:41:03.390752 unknown[1034]: wrote ssh authorized keys file for user: core Feb 8 23:41:03.424695 update-ssh-keys[1098]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:41:03.425209 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 8 23:41:03.438015 tar[1060]: ./bridge Feb 8 23:41:03.524263 tar[1060]: ./tuning Feb 8 23:41:03.587368 tar[1060]: ./firewall Feb 8 23:41:03.599399 systemd[1]: Finished prepare-critools.service. 
Feb 8 23:41:03.641844 tar[1060]: ./host-device Feb 8 23:41:03.679407 tar[1060]: ./sbr Feb 8 23:41:03.713940 tar[1060]: ./loopback Feb 8 23:41:03.746797 tar[1060]: ./dhcp Feb 8 23:41:03.841569 tar[1060]: ./ptp Feb 8 23:41:03.883102 tar[1060]: ./ipvlan Feb 8 23:41:03.922416 tar[1060]: ./bandwidth Feb 8 23:41:03.970638 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:41:03.993523 locksmithd[1095]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:41:04.117933 sshd_keygen[1066]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:41:04.150680 systemd[1]: Finished sshd-keygen.service. Feb 8 23:41:04.152982 systemd[1]: Starting issuegen.service... Feb 8 23:41:04.160803 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:41:04.160983 systemd[1]: Finished issuegen.service. Feb 8 23:41:04.163003 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:41:04.174116 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:41:04.176229 systemd[1]: Started getty@tty1.service. Feb 8 23:41:04.178048 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:41:04.178729 systemd[1]: Reached target getty.target. Feb 8 23:41:04.179376 systemd[1]: Reached target multi-user.target. Feb 8 23:41:04.181037 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:41:04.190721 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:41:04.191103 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:41:04.202955 systemd[1]: Startup finished in 956ms (kernel) + 11.163s (initrd) + 9.398s (userspace) = 21.518s. Feb 8 23:41:12.693606 systemd[1]: Created slice system-sshd.slice. Feb 8 23:41:12.696545 systemd[1]: Started sshd@0-172.24.4.229:22-172.24.4.1:34236.service. 
Feb 8 23:41:13.695365 sshd[1122]: Accepted publickey for core from 172.24.4.1 port 34236 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:41:13.699739 sshd[1122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:13.723119 systemd[1]: Created slice user-500.slice. Feb 8 23:41:13.726317 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:41:13.735617 systemd-logind[1045]: New session 1 of user core. Feb 8 23:41:13.747518 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:41:13.751232 systemd[1]: Starting user@500.service... Feb 8 23:41:13.764627 (systemd)[1125]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:13.900365 systemd[1125]: Queued start job for default target default.target. Feb 8 23:41:13.901961 systemd[1125]: Reached target paths.target. Feb 8 23:41:13.902116 systemd[1125]: Reached target sockets.target. Feb 8 23:41:13.902226 systemd[1125]: Reached target timers.target. Feb 8 23:41:13.902397 systemd[1125]: Reached target basic.target. Feb 8 23:41:13.902553 systemd[1125]: Reached target default.target. Feb 8 23:41:13.902668 systemd[1]: Started user@500.service. Feb 8 23:41:13.903215 systemd[1125]: Startup finished in 125ms. Feb 8 23:41:13.904621 systemd[1]: Started session-1.scope. Feb 8 23:41:14.382401 systemd[1]: Started sshd@1-172.24.4.229:22-172.24.4.1:34252.service. Feb 8 23:41:15.953714 sshd[1134]: Accepted publickey for core from 172.24.4.1 port 34252 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:41:15.958270 sshd[1134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:15.973395 systemd[1]: Started session-2.scope. Feb 8 23:41:15.974717 systemd-logind[1045]: New session 2 of user core. Feb 8 23:41:16.555670 sshd[1134]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:16.563744 systemd[1]: Started sshd@2-172.24.4.229:22-172.24.4.1:49652.service. 
Feb 8 23:41:16.568308 systemd[1]: sshd@1-172.24.4.229:22-172.24.4.1:34252.service: Deactivated successfully. Feb 8 23:41:16.570448 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:41:16.574437 systemd-logind[1045]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:41:16.577616 systemd-logind[1045]: Removed session 2. Feb 8 23:41:17.772561 sshd[1139]: Accepted publickey for core from 172.24.4.1 port 49652 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:41:17.776087 sshd[1139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:17.786317 systemd-logind[1045]: New session 3 of user core. Feb 8 23:41:17.786968 systemd[1]: Started session-3.scope. Feb 8 23:41:18.389935 sshd[1139]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:18.396285 systemd[1]: sshd@2-172.24.4.229:22-172.24.4.1:49652.service: Deactivated successfully. Feb 8 23:41:18.397635 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:41:18.400072 systemd-logind[1045]: Session 3 logged out. Waiting for processes to exit. Feb 8 23:41:18.402550 systemd[1]: Started sshd@3-172.24.4.229:22-172.24.4.1:49662.service. Feb 8 23:41:18.405676 systemd-logind[1045]: Removed session 3. Feb 8 23:41:19.547965 sshd[1146]: Accepted publickey for core from 172.24.4.1 port 49662 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:41:19.550508 sshd[1146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:19.559928 systemd-logind[1045]: New session 4 of user core. Feb 8 23:41:19.561104 systemd[1]: Started session-4.scope. Feb 8 23:41:20.326166 sshd[1146]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:20.331286 systemd[1]: sshd@3-172.24.4.229:22-172.24.4.1:49662.service: Deactivated successfully. Feb 8 23:41:20.332513 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:41:20.333507 systemd-logind[1045]: Session 4 logged out. 
Waiting for processes to exit. Feb 8 23:41:20.335865 systemd[1]: Started sshd@4-172.24.4.229:22-172.24.4.1:49676.service. Feb 8 23:41:20.338451 systemd-logind[1045]: Removed session 4. Feb 8 23:41:21.555384 sshd[1152]: Accepted publickey for core from 172.24.4.1 port 49676 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:41:21.558444 sshd[1152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:21.568055 systemd-logind[1045]: New session 5 of user core. Feb 8 23:41:21.568895 systemd[1]: Started session-5.scope. Feb 8 23:41:21.906396 sudo[1155]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:41:21.906905 sudo[1155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:41:22.543350 systemd[1]: Reloading. Feb 8 23:41:22.676204 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2024-02-08T23:41:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:41:22.676235 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2024-02-08T23:41:22Z" level=info msg="torcx already run" Feb 8 23:41:22.742452 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:41:22.742473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:41:22.765226 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 8 23:41:22.850978 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:41:22.859137 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:41:22.859677 systemd[1]: Reached target network-online.target. Feb 8 23:41:22.861154 systemd[1]: Started kubelet.service. Feb 8 23:41:22.878432 systemd[1]: Starting coreos-metadata.service... Feb 8 23:41:22.953694 coreos-metadata[1239]: Feb 08 23:41:22.953 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 8 23:41:22.976410 kubelet[1231]: E0208 23:41:22.976335 1231 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:41:22.979603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:41:22.979886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:41:23.278899 coreos-metadata[1239]: Feb 08 23:41:23.278 INFO Fetch successful Feb 8 23:41:23.278899 coreos-metadata[1239]: Feb 08 23:41:23.278 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 8 23:41:23.291544 coreos-metadata[1239]: Feb 08 23:41:23.291 INFO Fetch successful Feb 8 23:41:23.291544 coreos-metadata[1239]: Feb 08 23:41:23.291 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 8 23:41:23.300368 coreos-metadata[1239]: Feb 08 23:41:23.300 INFO Fetch successful Feb 8 23:41:23.300368 coreos-metadata[1239]: Feb 08 23:41:23.300 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 8 23:41:23.314414 coreos-metadata[1239]: Feb 08 23:41:23.314 INFO Fetch successful Feb 8 23:41:23.314414 coreos-metadata[1239]: Feb 08 23:41:23.314 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 8 23:41:23.327967 coreos-metadata[1239]: Feb 08 23:41:23.325 INFO Fetch successful Feb 8 23:41:23.341993 systemd[1]: Finished 
coreos-metadata.service. Feb 8 23:41:23.989170 systemd[1]: Stopped kubelet.service. Feb 8 23:41:24.026496 systemd[1]: Reloading. Feb 8 23:41:24.166248 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2024-02-08T23:41:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:41:24.166620 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2024-02-08T23:41:24Z" level=info msg="torcx already run" Feb 8 23:41:24.254960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:41:24.254982 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:41:24.279995 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:41:24.387058 systemd[1]: Started kubelet.service. Feb 8 23:41:24.462152 kubelet[1341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:41:24.462533 kubelet[1341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:41:24.462679 kubelet[1341]: I0208 23:41:24.462645 1341 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:41:24.464152 kubelet[1341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:41:24.464222 kubelet[1341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:41:25.531489 kubelet[1341]: I0208 23:41:25.531458 1341 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:41:25.531870 kubelet[1341]: I0208 23:41:25.531851 1341 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:41:25.532188 kubelet[1341]: I0208 23:41:25.532172 1341 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:41:25.535956 kubelet[1341]: I0208 23:41:25.535911 1341 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:41:25.537190 kubelet[1341]: I0208 23:41:25.537158 1341 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:41:25.537528 kubelet[1341]: I0208 23:41:25.537514 1341 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:41:25.537722 kubelet[1341]: I0208 23:41:25.537702 1341 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:41:25.537948 kubelet[1341]: I0208 23:41:25.537932 1341 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:41:25.538052 kubelet[1341]: I0208 23:41:25.538038 1341 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:41:25.538271 kubelet[1341]: I0208 23:41:25.538252 1341 state_mem.go:36] "Initialized new 
in-memory state store" Feb 8 23:41:25.542732 kubelet[1341]: I0208 23:41:25.542705 1341 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:41:25.543007 kubelet[1341]: I0208 23:41:25.542936 1341 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:41:25.543126 kubelet[1341]: I0208 23:41:25.543110 1341 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:41:25.543238 kubelet[1341]: I0208 23:41:25.543222 1341 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:41:25.543980 kubelet[1341]: E0208 23:41:25.543962 1341 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:25.544369 kubelet[1341]: E0208 23:41:25.544329 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:25.544591 kubelet[1341]: I0208 23:41:25.544572 1341 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:41:25.545160 kubelet[1341]: W0208 23:41:25.545143 1341 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:41:25.545844 kubelet[1341]: I0208 23:41:25.545799 1341 server.go:1186] "Started kubelet" Feb 8 23:41:25.550210 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 8 23:41:25.551908 kubelet[1341]: I0208 23:41:25.551687 1341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:41:25.552945 kubelet[1341]: E0208 23:41:25.552709 1341 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:41:25.552945 kubelet[1341]: E0208 23:41:25.552740 1341 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:41:25.554416 kubelet[1341]: I0208 23:41:25.554384 1341 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:41:25.555076 kubelet[1341]: I0208 23:41:25.555048 1341 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:41:25.559355 kubelet[1341]: I0208 23:41:25.559324 1341 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:41:25.559501 kubelet[1341]: I0208 23:41:25.559426 1341 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:41:25.578566 kubelet[1341]: W0208 23:41:25.578480 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:41:25.578566 kubelet[1341]: E0208 23:41:25.578569 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:41:25.579308 kubelet[1341]: W0208 23:41:25.579268 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:41:25.579395 kubelet[1341]: E0208 23:41:25.579328 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:41:25.579470 kubelet[1341]: E0208 23:41:25.579409 1341 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: 
leases.coordination.k8s.io "172.24.4.229" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:41:25.580158 kubelet[1341]: W0208 23:41:25.580119 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:41:25.580239 kubelet[1341]: E0208 23:41:25.580176 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:41:25.580783 kubelet[1341]: E0208 23:41:25.580258 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5a9c7dada", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 545761498, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 545761498, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:41:25.608268 kubelet[1341]: E0208 23:41:25.608101 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5aa322253", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 552726611, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 552726611, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:25.609920 kubelet[1341]: I0208 23:41:25.609885 1341 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:41:25.609992 kubelet[1341]: I0208 23:41:25.609962 1341 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:41:25.610028 kubelet[1341]: I0208 23:41:25.610003 1341 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:41:25.611117 kubelet[1341]: E0208 23:41:25.610811 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad6392a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.229 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:25.612712 kubelet[1341]: E0208 23:41:25.612582 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63bcc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.229 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:25.615619 kubelet[1341]: E0208 23:41:25.615525 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63e9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.229 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:41:25.619199 kubelet[1341]: I0208 23:41:25.619107 1341 policy_none.go:49] "None policy: Start" Feb 8 23:41:25.621887 kubelet[1341]: I0208 23:41:25.621663 1341 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:41:25.621887 kubelet[1341]: I0208 23:41:25.621764 1341 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:41:25.635522 systemd[1]: Created slice kubepods.slice. Feb 8 23:41:25.641411 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 8 23:41:25.645359 systemd[1]: Created slice kubepods-besteffort.slice. Feb 8 23:41:25.650465 kubelet[1341]: I0208 23:41:25.650442 1341 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:41:25.650862 kubelet[1341]: I0208 23:41:25.650849 1341 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:41:25.652723 kubelet[1341]: E0208 23:41:25.652708 1341 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.229\" not found" Feb 8 23:41:25.656284 kubelet[1341]: E0208 23:41:25.654749 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5b0227d30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 652364592, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 652364592, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 8 23:41:25.660285 kubelet[1341]: I0208 23:41:25.660269 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.229" Feb 8 23:41:25.661950 kubelet[1341]: E0208 23:41:25.661931 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.229" Feb 8 23:41:25.663291 kubelet[1341]: E0208 23:41:25.663221 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad6392a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.229 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 660232128, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad6392a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:25.664693 kubelet[1341]: E0208 23:41:25.664593 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63bcc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.229 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 660237197, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63bcc9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:25.666102 kubelet[1341]: E0208 23:41:25.666000 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63e9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.229 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 660245042, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63e9a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:41:25.765369 kubelet[1341]: I0208 23:41:25.765295 1341 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 8 23:41:25.781108 kubelet[1341]: E0208 23:41:25.781041 1341 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.24.4.229" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:41:25.792176 kubelet[1341]: I0208 23:41:25.788162 1341 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:41:25.792176 kubelet[1341]: I0208 23:41:25.788198 1341 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:41:25.792176 kubelet[1341]: I0208 23:41:25.788225 1341 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:41:25.792176 kubelet[1341]: E0208 23:41:25.788278 1341 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:41:25.792678 kubelet[1341]: W0208 23:41:25.792659 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:41:25.792797 kubelet[1341]: E0208 23:41:25.792787 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:41:25.863957 kubelet[1341]: I0208 23:41:25.863906 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.229" Feb 8 23:41:25.865437 kubelet[1341]: E0208 23:41:25.865400 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.229" Feb 8 23:41:25.866359 
kubelet[1341]: E0208 23:41:25.866237 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad6392a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.229 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 863748863, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad6392a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:25.867970 kubelet[1341]: E0208 23:41:25.867788 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63bcc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.229 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 863779180, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63bcc9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:25.948574 kubelet[1341]: E0208 23:41:25.948437 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63e9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.229 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 863788227, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63e9a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:26.183589 kubelet[1341]: E0208 23:41:26.183368 1341 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.24.4.229" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 8 23:41:26.267061 kubelet[1341]: I0208 23:41:26.266956 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.229" Feb 8 23:41:26.269235 kubelet[1341]: E0208 23:41:26.269149 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.229" Feb 8 23:41:26.269934 kubelet[1341]: E0208 23:41:26.269762 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad6392a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.229 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 26, 266853265, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad6392a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:41:26.348784 kubelet[1341]: E0208 23:41:26.348642 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63bcc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.229 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 26, 266884063, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63bcc9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:41:26.514944 kubelet[1341]: W0208 23:41:26.514899 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:41:26.514944 kubelet[1341]: E0208 23:41:26.514949 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:41:26.544475 kubelet[1341]: E0208 23:41:26.544435 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:26.548910 kubelet[1341]: E0208 23:41:26.548719 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63e9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.229 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 26, 266913478, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63e9a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:26.624375 kubelet[1341]: W0208 23:41:26.624332 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:41:26.624601 kubelet[1341]: E0208 23:41:26.624579 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:41:26.986898 kubelet[1341]: E0208 23:41:26.985623 1341 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.24.4.229" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 8 23:41:27.071173 kubelet[1341]: I0208 23:41:27.071119 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.229"
Feb 8 23:41:27.073973 kubelet[1341]: E0208 23:41:27.073728 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad6392a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.229 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 27, 71002106, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad6392a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:27.074515 kubelet[1341]: E0208 23:41:27.074163 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.229"
Feb 8 23:41:27.076690 kubelet[1341]: E0208 23:41:27.076579 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63bcc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.229 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 27, 71016493, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63bcc9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:27.118782 kubelet[1341]: W0208 23:41:27.118736 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:41:27.119071 kubelet[1341]: E0208 23:41:27.119045 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:41:27.143910 kubelet[1341]: W0208 23:41:27.143867 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:41:27.144177 kubelet[1341]: E0208 23:41:27.144154 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:41:27.149400 kubelet[1341]: E0208 23:41:27.149197 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63e9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.229 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 27, 71022515, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63e9a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:27.545682 kubelet[1341]: E0208 23:41:27.545582 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:28.339505 kubelet[1341]: W0208 23:41:28.339433 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:41:28.339909 kubelet[1341]: E0208 23:41:28.339877 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:41:28.546720 kubelet[1341]: E0208 23:41:28.546614 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:28.588114 kubelet[1341]: E0208 23:41:28.588033 1341 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.24.4.229" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 8 23:41:28.677775 kubelet[1341]: I0208 23:41:28.676604 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.229"
Feb 8 23:41:28.678376 kubelet[1341]: E0208 23:41:28.678204 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad6392a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.229 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 28, 675794785, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad6392a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:28.679750 kubelet[1341]: E0208 23:41:28.679714 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.229"
Feb 8 23:41:28.680353 kubelet[1341]: E0208 23:41:28.680238 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63bcc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.229 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 28, 676473828, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63bcc9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:28.682549 kubelet[1341]: E0208 23:41:28.682415 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63e9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.229 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 28, 676513683, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63e9a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:29.442545 kubelet[1341]: W0208 23:41:29.442473 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:41:29.442545 kubelet[1341]: E0208 23:41:29.442534 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:41:29.503136 kubelet[1341]: W0208 23:41:29.503094 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:41:29.503421 kubelet[1341]: E0208 23:41:29.503398 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:41:29.547286 kubelet[1341]: E0208 23:41:29.547228 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:29.940738 kubelet[1341]: W0208 23:41:29.940695 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:41:29.941066 kubelet[1341]: E0208 23:41:29.941040 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:41:30.548144 kubelet[1341]: E0208 23:41:30.548076 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:31.549170 kubelet[1341]: E0208 23:41:31.549049 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:31.791082 kubelet[1341]: E0208 23:41:31.790992 1341 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.24.4.229" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 8 23:41:31.881441 kubelet[1341]: I0208 23:41:31.881312 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.229"
Feb 8 23:41:31.883787 kubelet[1341]: E0208 23:41:31.883752 1341 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.229"
Feb 8 23:41:31.884408 kubelet[1341]: E0208 23:41:31.884266 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad6392a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.229 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606298275, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 31, 881256958, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad6392a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:31.886412 kubelet[1341]: E0208 23:41:31.886304 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63bcc9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.229 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606309065, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 31, 881266266, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63bcc9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:31.888369 kubelet[1341]: E0208 23:41:31.888264 1341 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.229.17b207b5ad63e9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.229", UID:"172.24.4.229", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.229 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.229"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 41, 25, 606320546, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 41, 31, 881272467, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.229.17b207b5ad63e9a2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:41:32.244547 kubelet[1341]: W0208 23:41:32.244428 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:41:32.244886 kubelet[1341]: E0208 23:41:32.244788 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:41:32.549869 kubelet[1341]: E0208 23:41:32.549741 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:33.474619 kubelet[1341]: W0208 23:41:33.474513 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:41:33.475060 kubelet[1341]: E0208 23:41:33.475033 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:41:33.550331 kubelet[1341]: E0208 23:41:33.550288 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:33.706091 kubelet[1341]: W0208 23:41:33.706045 1341 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:41:33.706382 kubelet[1341]: E0208 23:41:33.706358 1341 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:41:34.551686 kubelet[1341]: E0208 23:41:34.551618 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:35.534086 kubelet[1341]: I0208 23:41:35.533965 1341 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 8 23:41:35.551988 kubelet[1341]: E0208 23:41:35.551928 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:35.653290 kubelet[1341]: E0208 23:41:35.653186 1341 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.229\" not found"
Feb 8 23:41:35.976651 kubelet[1341]: E0208 23:41:35.966231 1341 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.229" not found
Feb 8 23:41:36.554257 kubelet[1341]: E0208 23:41:36.554149 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:37.186718 kubelet[1341]: E0208 23:41:37.186666 1341 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.229" not found
Feb 8 23:41:37.555666 kubelet[1341]: E0208 23:41:37.555610 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:38.201009 kubelet[1341]: E0208 23:41:38.200956 1341 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.229\" not found" node="172.24.4.229"
Feb 8 23:41:38.286323 kubelet[1341]: I0208 23:41:38.286263 1341 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.229"
Feb 8 23:41:38.557011 kubelet[1341]: E0208 23:41:38.556936 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:38.588730 kubelet[1341]: I0208 23:41:38.588639 1341 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.229"
Feb 8 23:41:38.639352 kubelet[1341]: E0208 23:41:38.639298 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:38.739923 kubelet[1341]: E0208 23:41:38.739771 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:38.841539 kubelet[1341]: E0208 23:41:38.841339 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:38.917380 sudo[1155]: pam_unix(sudo:session): session closed for user root
Feb 8 23:41:38.942294 kubelet[1341]: E0208 23:41:38.942229 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.042596 kubelet[1341]: E0208 23:41:39.042512 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.074998 sshd[1152]: pam_unix(sshd:session): session closed for user core
Feb 8 23:41:39.080720 systemd[1]: sshd@4-172.24.4.229:22-172.24.4.1:49676.service: Deactivated successfully.
Feb 8 23:41:39.082674 systemd[1]: session-5.scope: Deactivated successfully.
Feb 8 23:41:39.084264 systemd-logind[1045]: Session 5 logged out. Waiting for processes to exit.
Feb 8 23:41:39.086523 systemd-logind[1045]: Removed session 5.
Feb 8 23:41:39.145008 kubelet[1341]: E0208 23:41:39.143604 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.245445 kubelet[1341]: E0208 23:41:39.245369 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.346356 kubelet[1341]: E0208 23:41:39.346266 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.447159 kubelet[1341]: E0208 23:41:39.446955 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.547852 kubelet[1341]: E0208 23:41:39.547751 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.558321 kubelet[1341]: E0208 23:41:39.558269 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:39.649083 kubelet[1341]: E0208 23:41:39.648977 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.749266 kubelet[1341]: E0208 23:41:39.749177 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.849935 kubelet[1341]: E0208 23:41:39.849878 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:39.950909 kubelet[1341]: E0208 23:41:39.950785 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.052472 kubelet[1341]: E0208 23:41:40.052262 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.153493 kubelet[1341]: E0208 23:41:40.153380 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.254371 kubelet[1341]: E0208 23:41:40.254286 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.355377 kubelet[1341]: E0208 23:41:40.355160 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.456319 kubelet[1341]: E0208 23:41:40.456235 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.557069 kubelet[1341]: E0208 23:41:40.556995 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.559245 kubelet[1341]: E0208 23:41:40.559203 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:40.658069 kubelet[1341]: E0208 23:41:40.657898 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.758766 kubelet[1341]: E0208 23:41:40.758691 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.859228 kubelet[1341]: E0208 23:41:40.859129 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:40.960397 kubelet[1341]: E0208 23:41:40.960192 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:41.061376 kubelet[1341]: E0208 23:41:41.061299 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:41.162084 kubelet[1341]: E0208 23:41:41.162024 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:41.262895 kubelet[1341]: E0208 23:41:41.262799 1341 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.229\" not found"
Feb 8 23:41:41.364277 kubelet[1341]: I0208 23:41:41.364212 1341 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 8 23:41:41.364860 env[1051]: time="2024-02-08T23:41:41.364741923Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 8 23:41:41.365525 kubelet[1341]: I0208 23:41:41.365088 1341 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 8 23:41:41.553560 kubelet[1341]: I0208 23:41:41.553343 1341 apiserver.go:52] "Watching apiserver"
Feb 8 23:41:41.560664 kubelet[1341]: E0208 23:41:41.560617 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:41.562325 kubelet[1341]: I0208 23:41:41.562300 1341 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:41:41.562568 kubelet[1341]: I0208 23:41:41.562552 1341 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:41:41.572579 systemd[1]: Created slice kubepods-besteffort-pod322197ec_ada6_4888_9767_c33810a92e7a.slice.
Feb 8 23:41:41.588610 systemd[1]: Created slice kubepods-burstable-podcd01c2fe_73c2_4c8b_8f4e_859ddac69780.slice.
Feb 8 23:41:41.662313 kubelet[1341]: I0208 23:41:41.662140 1341 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 8 23:41:41.665136 kubelet[1341]: I0208 23:41:41.665085 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-run\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.665497 kubelet[1341]: I0208 23:41:41.665471 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-xtables-lock\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.665917 kubelet[1341]: I0208 23:41:41.665761 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-net\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.666186 kubelet[1341]: I0208 23:41:41.666161 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-kernel\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.666461 kubelet[1341]: I0208 23:41:41.666431 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-kube-api-access-kpq5s\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.666682 kubelet[1341]: I0208 23:41:41.666658 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hubble-tls\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.666978 kubelet[1341]: I0208 23:41:41.666953 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/322197ec-ada6-4888-9767-c33810a92e7a-lib-modules\") pod \"kube-proxy-ltjrb\" (UID: \"322197ec-ada6-4888-9767-c33810a92e7a\") " pod="kube-system/kube-proxy-ltjrb"
Feb 8 23:41:41.667197 kubelet[1341]: I0208 23:41:41.667174 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-bpf-maps\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.667448 kubelet[1341]: I0208 23:41:41.667414 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-cgroup\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.667718 kubelet[1341]: I0208 23:41:41.667692 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cni-path\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.667958 kubelet[1341]: I0208 23:41:41.667931 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-lib-modules\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.668184 kubelet[1341]: I0208 23:41:41.668160 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-config-path\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.668401 kubelet[1341]: I0208 23:41:41.668377 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/322197ec-ada6-4888-9767-c33810a92e7a-kube-proxy\") pod \"kube-proxy-ltjrb\" (UID: \"322197ec-ada6-4888-9767-c33810a92e7a\") " pod="kube-system/kube-proxy-ltjrb"
Feb 8 23:41:41.668640 kubelet[1341]: I0208 23:41:41.668606 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/322197ec-ada6-4888-9767-c33810a92e7a-xtables-lock\") pod \"kube-proxy-ltjrb\" (UID: \"322197ec-ada6-4888-9767-c33810a92e7a\") " pod="kube-system/kube-proxy-ltjrb"
Feb 8 23:41:41.668894 kubelet[1341]: I0208 23:41:41.668868 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7lcr\" (UniqueName: \"kubernetes.io/projected/322197ec-ada6-4888-9767-c33810a92e7a-kube-api-access-r7lcr\") pod \"kube-proxy-ltjrb\" (UID: \"322197ec-ada6-4888-9767-c33810a92e7a\") " pod="kube-system/kube-proxy-ltjrb"
Feb 8 23:41:41.669140 kubelet[1341]: I0208 23:41:41.669115 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hostproc\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.669395 kubelet[1341]: I0208 23:41:41.669370 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-clustermesh-secrets\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.669596 kubelet[1341]: I0208 23:41:41.669575 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-etc-cni-netd\") pod \"cilium-6z2bk\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") " pod="kube-system/cilium-6z2bk"
Feb 8 23:41:41.669749 kubelet[1341]: I0208 23:41:41.669728 1341 reconciler.go:41] "Reconciler: start to sync state"
Feb 8 23:41:41.902857 env[1051]: time="2024-02-08T23:41:41.898486591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6z2bk,Uid:cd01c2fe-73c2-4c8b-8f4e-859ddac69780,Namespace:kube-system,Attempt:0,}"
Feb 8 23:41:42.186026 env[1051]: time="2024-02-08T23:41:42.185684807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltjrb,Uid:322197ec-ada6-4888-9767-c33810a92e7a,Namespace:kube-system,Attempt:0,}"
Feb 8 23:41:42.562663 kubelet[1341]: E0208 23:41:42.562572 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:41:42.797389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241551214.mount: Deactivated successfully.
Feb 8 23:41:42.819191 env[1051]: time="2024-02-08T23:41:42.818458730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.822666 env[1051]: time="2024-02-08T23:41:42.822576336Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.827037 env[1051]: time="2024-02-08T23:41:42.826956206Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.829628 env[1051]: time="2024-02-08T23:41:42.829540871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.832451 env[1051]: time="2024-02-08T23:41:42.832379465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.840707 env[1051]: time="2024-02-08T23:41:42.840607875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.843799 env[1051]: time="2024-02-08T23:41:42.843732707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.845802 env[1051]: time="2024-02-08T23:41:42.845710823Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:42.930481 env[1051]: time="2024-02-08T23:41:42.930202904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:41:42.930868 env[1051]: time="2024-02-08T23:41:42.930359028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:41:42.930868 env[1051]: time="2024-02-08T23:41:42.930587017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:41:42.930868 env[1051]: time="2024-02-08T23:41:42.930680602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:41:42.931351 env[1051]: time="2024-02-08T23:41:42.931209847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:41:42.931664 env[1051]: time="2024-02-08T23:41:42.931570685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:41:42.931802 env[1051]: time="2024-02-08T23:41:42.931627451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5107e4e2037976a648558bb572877ee28fd6460b681717b6ade0d01d49f226fb pid=1441 runtime=io.containerd.runc.v2 Feb 8 23:41:42.932685 env[1051]: time="2024-02-08T23:41:42.932559923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e pid=1434 runtime=io.containerd.runc.v2 Feb 8 23:41:42.959900 systemd[1]: Started cri-containerd-5107e4e2037976a648558bb572877ee28fd6460b681717b6ade0d01d49f226fb.scope. Feb 8 23:41:42.986058 systemd[1]: Started cri-containerd-3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e.scope. Feb 8 23:41:43.019199 env[1051]: time="2024-02-08T23:41:43.019110788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltjrb,Uid:322197ec-ada6-4888-9767-c33810a92e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5107e4e2037976a648558bb572877ee28fd6460b681717b6ade0d01d49f226fb\"" Feb 8 23:41:43.022584 env[1051]: time="2024-02-08T23:41:43.022478045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:41:43.028974 env[1051]: time="2024-02-08T23:41:43.028921509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6z2bk,Uid:cd01c2fe-73c2-4c8b-8f4e-859ddac69780,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\"" Feb 8 23:41:43.563206 kubelet[1341]: E0208 23:41:43.563152 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:44.564157 kubelet[1341]: E0208 23:41:44.564093 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 8 23:41:44.580769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361467068.mount: Deactivated successfully. Feb 8 23:41:45.544119 kubelet[1341]: E0208 23:41:45.544063 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:45.565294 kubelet[1341]: E0208 23:41:45.565188 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:45.987281 env[1051]: time="2024-02-08T23:41:45.987090486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:45.993438 env[1051]: time="2024-02-08T23:41:45.993356595Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:45.998678 env[1051]: time="2024-02-08T23:41:45.998593712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:46.002951 env[1051]: time="2024-02-08T23:41:46.002893388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:46.003532 env[1051]: time="2024-02-08T23:41:46.003477034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:41:46.006064 env[1051]: time="2024-02-08T23:41:46.006009480Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:41:46.008748 env[1051]: time="2024-02-08T23:41:46.008709221Z" level=info msg="CreateContainer within sandbox \"5107e4e2037976a648558bb572877ee28fd6460b681717b6ade0d01d49f226fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:41:46.039827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655354012.mount: Deactivated successfully. Feb 8 23:41:46.045959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253023631.mount: Deactivated successfully. Feb 8 23:41:46.081842 env[1051]: time="2024-02-08T23:41:46.081762664Z" level=info msg="CreateContainer within sandbox \"5107e4e2037976a648558bb572877ee28fd6460b681717b6ade0d01d49f226fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23bfc264654cd0fe110ced1718518cef2d03671392c7c015e5f43976918ec26b\"" Feb 8 23:41:46.083081 env[1051]: time="2024-02-08T23:41:46.083054620Z" level=info msg="StartContainer for \"23bfc264654cd0fe110ced1718518cef2d03671392c7c015e5f43976918ec26b\"" Feb 8 23:41:46.123307 systemd[1]: Started cri-containerd-23bfc264654cd0fe110ced1718518cef2d03671392c7c015e5f43976918ec26b.scope. 
Feb 8 23:41:46.180599 env[1051]: time="2024-02-08T23:41:46.180518319Z" level=info msg="StartContainer for \"23bfc264654cd0fe110ced1718518cef2d03671392c7c015e5f43976918ec26b\" returns successfully" Feb 8 23:41:46.566005 kubelet[1341]: E0208 23:41:46.565935 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:47.566935 kubelet[1341]: E0208 23:41:47.566890 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:48.568243 kubelet[1341]: E0208 23:41:48.568178 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:48.686459 update_engine[1046]: I0208 23:41:48.685879 1046 update_attempter.cc:509] Updating boot flags... Feb 8 23:41:49.569091 kubelet[1341]: E0208 23:41:49.569050 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:50.570560 kubelet[1341]: E0208 23:41:50.570501 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:51.571583 kubelet[1341]: E0208 23:41:51.571504 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:52.572611 kubelet[1341]: E0208 23:41:52.572493 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:53.327843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409800736.mount: Deactivated successfully. 
Feb 8 23:41:53.573542 kubelet[1341]: E0208 23:41:53.573463 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:54.574353 kubelet[1341]: E0208 23:41:54.574250 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:55.574686 kubelet[1341]: E0208 23:41:55.574649 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:56.575610 kubelet[1341]: E0208 23:41:56.575556 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:57.575722 kubelet[1341]: E0208 23:41:57.575671 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:58.275216 env[1051]: time="2024-02-08T23:41:58.275088140Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:58.284539 env[1051]: time="2024-02-08T23:41:58.284468113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:58.292023 env[1051]: time="2024-02-08T23:41:58.291957619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:41:58.292848 env[1051]: time="2024-02-08T23:41:58.292734517Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image 
reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:41:58.297182 env[1051]: time="2024-02-08T23:41:58.297075955Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:41:58.325954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449003934.mount: Deactivated successfully. Feb 8 23:41:58.344977 env[1051]: time="2024-02-08T23:41:58.344864580Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\"" Feb 8 23:41:58.346602 env[1051]: time="2024-02-08T23:41:58.346521890Z" level=info msg="StartContainer for \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\"" Feb 8 23:41:58.387690 systemd[1]: Started cri-containerd-8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de.scope. Feb 8 23:41:58.434446 systemd[1]: cri-containerd-8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de.scope: Deactivated successfully. 
Feb 8 23:41:58.441047 env[1051]: time="2024-02-08T23:41:58.440784734Z" level=info msg="StartContainer for \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\" returns successfully" Feb 8 23:41:58.804184 kubelet[1341]: E0208 23:41:58.576165 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:58.972019 kubelet[1341]: I0208 23:41:58.971967 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ltjrb" podStartSLOduration=-9.223372015882923e+09 pod.CreationTimestamp="2024-02-08 23:41:38 +0000 UTC" firstStartedPulling="2024-02-08 23:41:43.021332001 +0000 UTC m=+18.627912221" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:41:46.915397969 +0000 UTC m=+22.521978469" watchObservedRunningTime="2024-02-08 23:41:58.971853605 +0000 UTC m=+34.578433875" Feb 8 23:41:59.317698 systemd[1]: run-containerd-runc-k8s.io-8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de-runc.oIa0yW.mount: Deactivated successfully. Feb 8 23:41:59.317979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de-rootfs.mount: Deactivated successfully. 
Feb 8 23:41:59.501808 env[1051]: time="2024-02-08T23:41:59.501678392Z" level=info msg="shim disconnected" id=8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de Feb 8 23:41:59.501808 env[1051]: time="2024-02-08T23:41:59.501792076Z" level=warning msg="cleaning up after shim disconnected" id=8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de namespace=k8s.io Feb 8 23:41:59.502953 env[1051]: time="2024-02-08T23:41:59.501858771Z" level=info msg="cleaning up dead shim" Feb 8 23:41:59.521037 env[1051]: time="2024-02-08T23:41:59.520904132Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:41:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1719 runtime=io.containerd.runc.v2\n" Feb 8 23:41:59.577537 kubelet[1341]: E0208 23:41:59.577300 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:59.933788 env[1051]: time="2024-02-08T23:41:59.933615079Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:42:00.168076 env[1051]: time="2024-02-08T23:42:00.167937060Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\"" Feb 8 23:42:00.169349 env[1051]: time="2024-02-08T23:42:00.169283497Z" level=info msg="StartContainer for \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\"" Feb 8 23:42:00.231876 systemd[1]: Started cri-containerd-8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8.scope. 
Feb 8 23:42:00.281236 env[1051]: time="2024-02-08T23:42:00.281127100Z" level=info msg="StartContainer for \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\" returns successfully" Feb 8 23:42:00.290041 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:42:00.290353 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:42:00.290886 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:42:00.295117 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:42:00.295490 systemd[1]: cri-containerd-8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8.scope: Deactivated successfully. Feb 8 23:42:00.302165 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:42:00.315089 systemd[1]: run-containerd-runc-k8s.io-8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8-runc.lbyCn3.mount: Deactivated successfully. Feb 8 23:42:00.320273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8-rootfs.mount: Deactivated successfully. 
Feb 8 23:42:00.337126 env[1051]: time="2024-02-08T23:42:00.337080510Z" level=info msg="shim disconnected" id=8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8 Feb 8 23:42:00.337472 env[1051]: time="2024-02-08T23:42:00.337452488Z" level=warning msg="cleaning up after shim disconnected" id=8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8 namespace=k8s.io Feb 8 23:42:00.337550 env[1051]: time="2024-02-08T23:42:00.337535103Z" level=info msg="cleaning up dead shim" Feb 8 23:42:00.347042 env[1051]: time="2024-02-08T23:42:00.346999914Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1786 runtime=io.containerd.runc.v2\n" Feb 8 23:42:00.578756 kubelet[1341]: E0208 23:42:00.578628 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:00.938395 env[1051]: time="2024-02-08T23:42:00.938194407Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:42:00.966513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616654664.mount: Deactivated successfully. Feb 8 23:42:00.987490 env[1051]: time="2024-02-08T23:42:00.987423468Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\"" Feb 8 23:42:00.988552 env[1051]: time="2024-02-08T23:42:00.988503304Z" level=info msg="StartContainer for \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\"" Feb 8 23:42:01.025963 systemd[1]: Started cri-containerd-874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520.scope. 
Feb 8 23:42:01.072965 systemd[1]: cri-containerd-874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520.scope: Deactivated successfully. Feb 8 23:42:01.076520 env[1051]: time="2024-02-08T23:42:01.076440609Z" level=info msg="StartContainer for \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\" returns successfully" Feb 8 23:42:01.113024 env[1051]: time="2024-02-08T23:42:01.112969545Z" level=info msg="shim disconnected" id=874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520 Feb 8 23:42:01.113024 env[1051]: time="2024-02-08T23:42:01.113020581Z" level=warning msg="cleaning up after shim disconnected" id=874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520 namespace=k8s.io Feb 8 23:42:01.113024 env[1051]: time="2024-02-08T23:42:01.113033776Z" level=info msg="cleaning up dead shim" Feb 8 23:42:01.120678 env[1051]: time="2024-02-08T23:42:01.120640600Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1844 runtime=io.containerd.runc.v2\n" Feb 8 23:42:01.316924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715535174.mount: Deactivated successfully. Feb 8 23:42:01.580274 kubelet[1341]: E0208 23:42:01.579691 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:01.956137 env[1051]: time="2024-02-08T23:42:01.955546375Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:42:01.980715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454360260.mount: Deactivated successfully. Feb 8 23:42:01.997801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829270055.mount: Deactivated successfully. 
Feb 8 23:42:02.003177 env[1051]: time="2024-02-08T23:42:02.003102956Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\"" Feb 8 23:42:02.004562 env[1051]: time="2024-02-08T23:42:02.004468417Z" level=info msg="StartContainer for \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\"" Feb 8 23:42:02.044667 systemd[1]: Started cri-containerd-e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042.scope. Feb 8 23:42:02.077021 systemd[1]: cri-containerd-e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042.scope: Deactivated successfully. Feb 8 23:42:02.078801 env[1051]: time="2024-02-08T23:42:02.078666926Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd01c2fe_73c2_4c8b_8f4e_859ddac69780.slice/cri-containerd-e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042.scope/memory.events\": no such file or directory" Feb 8 23:42:02.084362 env[1051]: time="2024-02-08T23:42:02.083878675Z" level=info msg="StartContainer for \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\" returns successfully" Feb 8 23:42:02.112384 env[1051]: time="2024-02-08T23:42:02.112342777Z" level=info msg="shim disconnected" id=e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042 Feb 8 23:42:02.112637 env[1051]: time="2024-02-08T23:42:02.112618043Z" level=warning msg="cleaning up after shim disconnected" id=e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042 namespace=k8s.io Feb 8 23:42:02.112755 env[1051]: time="2024-02-08T23:42:02.112739400Z" level=info msg="cleaning up dead shim" Feb 8 23:42:02.122874 env[1051]: time="2024-02-08T23:42:02.122805118Z" level=warning 
msg="cleanup warnings time=\"2024-02-08T23:42:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1901 runtime=io.containerd.runc.v2\n" Feb 8 23:42:02.582376 kubelet[1341]: E0208 23:42:02.582247 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:42:02.956284 env[1051]: time="2024-02-08T23:42:02.956095911Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:42:02.986497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883120801.mount: Deactivated successfully. Feb 8 23:42:03.001738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014570822.mount: Deactivated successfully. Feb 8 23:42:03.005881 env[1051]: time="2024-02-08T23:42:03.005768148Z" level=info msg="CreateContainer within sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\"" Feb 8 23:42:03.007459 env[1051]: time="2024-02-08T23:42:03.007388318Z" level=info msg="StartContainer for \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\"" Feb 8 23:42:03.045230 systemd[1]: Started cri-containerd-d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a.scope. 
Feb 8 23:42:03.106861 env[1051]: time="2024-02-08T23:42:03.106768512Z" level=info msg="StartContainer for \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\" returns successfully"
Feb 8 23:42:03.243154 kubelet[1341]: I0208 23:42:03.242933 1341 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 8 23:42:03.583510 kubelet[1341]: E0208 23:42:03.583429 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:03.653852 kernel: Initializing XFRM netlink socket
Feb 8 23:42:03.999761 kubelet[1341]: I0208 23:42:03.999630 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6z2bk" podStartSLOduration=-9.223372010855253e+09 pod.CreationTimestamp="2024-02-08 23:41:38 +0000 UTC" firstStartedPulling="2024-02-08 23:41:43.030339684 +0000 UTC m=+18.636919904" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:03.999016063 +0000 UTC m=+39.605596363" watchObservedRunningTime="2024-02-08 23:42:03.999523124 +0000 UTC m=+39.606103384"
Feb 8 23:42:04.585196 kubelet[1341]: E0208 23:42:04.585119 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:05.393924 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 8 23:42:05.394049 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 8 23:42:05.395132 systemd-networkd[975]: cilium_host: Link UP
Feb 8 23:42:05.395484 systemd-networkd[975]: cilium_net: Link UP
Feb 8 23:42:05.395986 systemd-networkd[975]: cilium_net: Gained carrier
Feb 8 23:42:05.396347 systemd-networkd[975]: cilium_host: Gained carrier
Feb 8 23:42:05.396637 systemd-networkd[975]: cilium_net: Gained IPv6LL
Feb 8 23:42:05.408209 systemd-networkd[975]: cilium_host: Gained IPv6LL
Feb 8 23:42:05.541042 systemd-networkd[975]: cilium_vxlan: Link UP
Feb 8 23:42:05.541054 systemd-networkd[975]: cilium_vxlan: Gained carrier
Feb 8 23:42:05.544256 kubelet[1341]: E0208 23:42:05.544219 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:05.585531 kubelet[1341]: E0208 23:42:05.585470 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:05.887874 kernel: NET: Registered PF_ALG protocol family
Feb 8 23:42:06.586349 kubelet[1341]: E0208 23:42:06.586275 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:06.887254 systemd-networkd[975]: lxc_health: Link UP
Feb 8 23:42:06.894353 systemd-networkd[975]: lxc_health: Gained carrier
Feb 8 23:42:06.894860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:42:07.174203 systemd-networkd[975]: cilium_vxlan: Gained IPv6LL
Feb 8 23:42:07.587330 kubelet[1341]: E0208 23:42:07.587266 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:08.071212 systemd-networkd[975]: lxc_health: Gained IPv6LL
Feb 8 23:42:08.588028 kubelet[1341]: E0208 23:42:08.587923 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:09.588490 kubelet[1341]: E0208 23:42:09.588379 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:09.881482 kubelet[1341]: I0208 23:42:09.881363 1341 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 8 23:42:10.589606 kubelet[1341]: E0208 23:42:10.589217 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:11.590018 kubelet[1341]: E0208 23:42:11.589880 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:12.590491 kubelet[1341]: E0208 23:42:12.590405 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:13.008733 kubelet[1341]: I0208 23:42:13.008649 1341 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:42:13.024936 systemd[1]: Created slice kubepods-besteffort-pod846df9d3_1b88_437a_9a50_2dcf8b74026b.slice.
Feb 8 23:42:13.108290 kubelet[1341]: I0208 23:42:13.108242 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v28d\" (UniqueName: \"kubernetes.io/projected/846df9d3-1b88-437a-9a50-2dcf8b74026b-kube-api-access-5v28d\") pod \"nginx-deployment-8ffc5cf85-cf2mw\" (UID: \"846df9d3-1b88-437a-9a50-2dcf8b74026b\") " pod="default/nginx-deployment-8ffc5cf85-cf2mw"
Feb 8 23:42:13.335207 env[1051]: time="2024-02-08T23:42:13.333625279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-cf2mw,Uid:846df9d3-1b88-437a-9a50-2dcf8b74026b,Namespace:default,Attempt:0,}"
Feb 8 23:42:13.408200 systemd-networkd[975]: lxc96d9b7cdf06e: Link UP
Feb 8 23:42:13.417935 kernel: eth0: renamed from tmp0f2c3
Feb 8 23:42:13.429439 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:42:13.429618 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc96d9b7cdf06e: link becomes ready
Feb 8 23:42:13.429556 systemd-networkd[975]: lxc96d9b7cdf06e: Gained carrier
Feb 8 23:42:13.592652 kubelet[1341]: E0208 23:42:13.592230 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:13.709179 env[1051]: time="2024-02-08T23:42:13.708953168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:42:13.710403 env[1051]: time="2024-02-08T23:42:13.709063696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:42:13.710403 env[1051]: time="2024-02-08T23:42:13.709110193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:42:13.710403 env[1051]: time="2024-02-08T23:42:13.709522076Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f2c383df04639491ab3848e74f61547ba045ff0159aa985c1e6a4dccae9a9ab pid=2426 runtime=io.containerd.runc.v2
Feb 8 23:42:13.746583 systemd[1]: Started cri-containerd-0f2c383df04639491ab3848e74f61547ba045ff0159aa985c1e6a4dccae9a9ab.scope.
Feb 8 23:42:13.791424 env[1051]: time="2024-02-08T23:42:13.791337097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-cf2mw,Uid:846df9d3-1b88-437a-9a50-2dcf8b74026b,Namespace:default,Attempt:0,} returns sandbox id \"0f2c383df04639491ab3848e74f61547ba045ff0159aa985c1e6a4dccae9a9ab\""
Feb 8 23:42:13.793018 env[1051]: time="2024-02-08T23:42:13.792992733Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 8 23:42:14.235696 systemd[1]: run-containerd-runc-k8s.io-0f2c383df04639491ab3848e74f61547ba045ff0159aa985c1e6a4dccae9a9ab-runc.JdVNiH.mount: Deactivated successfully.
Feb 8 23:42:14.592578 kubelet[1341]: E0208 23:42:14.592475 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:15.046223 systemd-networkd[975]: lxc96d9b7cdf06e: Gained IPv6LL
Feb 8 23:42:15.593704 kubelet[1341]: E0208 23:42:15.593634 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:16.594393 kubelet[1341]: E0208 23:42:16.594330 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:17.594738 kubelet[1341]: E0208 23:42:17.594678 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:18.506528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105228514.mount: Deactivated successfully.
Feb 8 23:42:18.594868 kubelet[1341]: E0208 23:42:18.594783 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:19.595950 kubelet[1341]: E0208 23:42:19.595879 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:20.007638 env[1051]: time="2024-02-08T23:42:20.007566230Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:20.012147 env[1051]: time="2024-02-08T23:42:20.012054949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:20.022798 env[1051]: time="2024-02-08T23:42:20.022702852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:20.027211 env[1051]: time="2024-02-08T23:42:20.027130227Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:20.029674 env[1051]: time="2024-02-08T23:42:20.029538715Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 8 23:42:20.035411 env[1051]: time="2024-02-08T23:42:20.035347131Z" level=info msg="CreateContainer within sandbox \"0f2c383df04639491ab3848e74f61547ba045ff0159aa985c1e6a4dccae9a9ab\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 8 23:42:20.072908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669918794.mount: Deactivated successfully.
Feb 8 23:42:20.074890 env[1051]: time="2024-02-08T23:42:20.074406397Z" level=info msg="CreateContainer within sandbox \"0f2c383df04639491ab3848e74f61547ba045ff0159aa985c1e6a4dccae9a9ab\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"493bf6254296a5b96cd3bfa290ea101eca8b8b6c43628a5aa3b1e0d0687e7a98\""
Feb 8 23:42:20.076119 env[1051]: time="2024-02-08T23:42:20.076005688Z" level=info msg="StartContainer for \"493bf6254296a5b96cd3bfa290ea101eca8b8b6c43628a5aa3b1e0d0687e7a98\""
Feb 8 23:42:20.112171 systemd[1]: run-containerd-runc-k8s.io-493bf6254296a5b96cd3bfa290ea101eca8b8b6c43628a5aa3b1e0d0687e7a98-runc.OXQQCl.mount: Deactivated successfully.
Feb 8 23:42:20.117964 systemd[1]: Started cri-containerd-493bf6254296a5b96cd3bfa290ea101eca8b8b6c43628a5aa3b1e0d0687e7a98.scope.
Feb 8 23:42:20.335439 env[1051]: time="2024-02-08T23:42:20.335033259Z" level=info msg="StartContainer for \"493bf6254296a5b96cd3bfa290ea101eca8b8b6c43628a5aa3b1e0d0687e7a98\" returns successfully"
Feb 8 23:42:20.597055 kubelet[1341]: E0208 23:42:20.596851 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:21.108396 kubelet[1341]: I0208 23:42:21.108344 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-cf2mw" podStartSLOduration=-9.223372027746523e+09 pod.CreationTimestamp="2024-02-08 23:42:12 +0000 UTC" firstStartedPulling="2024-02-08 23:42:13.792428815 +0000 UTC m=+49.399009045" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:21.105864555 +0000 UTC m=+56.712444845" watchObservedRunningTime="2024-02-08 23:42:21.108253116 +0000 UTC m=+56.714833416"
Feb 8 23:42:21.597875 kubelet[1341]: E0208 23:42:21.597785 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:22.599617 kubelet[1341]: E0208 23:42:22.599560 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:23.601505 kubelet[1341]: E0208 23:42:23.601377 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:24.602131 kubelet[1341]: E0208 23:42:24.601986 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:25.544144 kubelet[1341]: E0208 23:42:25.544090 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:25.602339 kubelet[1341]: E0208 23:42:25.602295 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:26.603946 kubelet[1341]: E0208 23:42:26.603797 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:27.604627 kubelet[1341]: E0208 23:42:27.604553 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:28.418155 kubelet[1341]: I0208 23:42:28.418080 1341 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:42:28.441053 systemd[1]: Created slice kubepods-besteffort-podd8c30a21_0549_4ef3_b379_cd90e061a2e3.slice.
Feb 8 23:42:28.521305 kubelet[1341]: I0208 23:42:28.521191 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d8c30a21-0549-4ef3-b379-cd90e061a2e3-data\") pod \"nfs-server-provisioner-0\" (UID: \"d8c30a21-0549-4ef3-b379-cd90e061a2e3\") " pod="default/nfs-server-provisioner-0"
Feb 8 23:42:28.521305 kubelet[1341]: I0208 23:42:28.521284 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57k9j\" (UniqueName: \"kubernetes.io/projected/d8c30a21-0549-4ef3-b379-cd90e061a2e3-kube-api-access-57k9j\") pod \"nfs-server-provisioner-0\" (UID: \"d8c30a21-0549-4ef3-b379-cd90e061a2e3\") " pod="default/nfs-server-provisioner-0"
Feb 8 23:42:28.607139 kubelet[1341]: E0208 23:42:28.606905 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:28.752036 env[1051]: time="2024-02-08T23:42:28.749402932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d8c30a21-0549-4ef3-b379-cd90e061a2e3,Namespace:default,Attempt:0,}"
Feb 8 23:42:28.891334 systemd-networkd[975]: lxccc55703aaacc: Link UP
Feb 8 23:42:28.897976 kernel: eth0: renamed from tmpa571b
Feb 8 23:42:28.916002 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:42:28.916217 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccc55703aaacc: link becomes ready
Feb 8 23:42:28.917481 systemd-networkd[975]: lxccc55703aaacc: Gained carrier
Feb 8 23:42:29.206297 env[1051]: time="2024-02-08T23:42:29.205395453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:42:29.206297 env[1051]: time="2024-02-08T23:42:29.205467137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:42:29.206297 env[1051]: time="2024-02-08T23:42:29.205483067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:42:29.206958 env[1051]: time="2024-02-08T23:42:29.206732370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a571b87be1d0aa0e77e469a3ffebc6c92df2d87e6116cd8e84e19ae4be2abacd pid=2598 runtime=io.containerd.runc.v2
Feb 8 23:42:29.227006 systemd[1]: Started cri-containerd-a571b87be1d0aa0e77e469a3ffebc6c92df2d87e6116cd8e84e19ae4be2abacd.scope.
Feb 8 23:42:29.290192 env[1051]: time="2024-02-08T23:42:29.290134976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d8c30a21-0549-4ef3-b379-cd90e061a2e3,Namespace:default,Attempt:0,} returns sandbox id \"a571b87be1d0aa0e77e469a3ffebc6c92df2d87e6116cd8e84e19ae4be2abacd\""
Feb 8 23:42:29.293055 env[1051]: time="2024-02-08T23:42:29.293008084Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 8 23:42:29.607788 kubelet[1341]: E0208 23:42:29.607543 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:29.661968 systemd[1]: run-containerd-runc-k8s.io-a571b87be1d0aa0e77e469a3ffebc6c92df2d87e6116cd8e84e19ae4be2abacd-runc.Z18E1o.mount: Deactivated successfully.
Feb 8 23:42:30.150235 systemd-networkd[975]: lxccc55703aaacc: Gained IPv6LL
Feb 8 23:42:30.608197 kubelet[1341]: E0208 23:42:30.608119 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:31.609302 kubelet[1341]: E0208 23:42:31.609217 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:32.610028 kubelet[1341]: E0208 23:42:32.609982 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:33.571867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393917588.mount: Deactivated successfully.
Feb 8 23:42:33.610840 kubelet[1341]: E0208 23:42:33.610772 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:34.611991 kubelet[1341]: E0208 23:42:34.611897 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:35.612912 kubelet[1341]: E0208 23:42:35.612741 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:36.613189 kubelet[1341]: E0208 23:42:36.613116 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:36.637259 env[1051]: time="2024-02-08T23:42:36.637123819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:36.642486 env[1051]: time="2024-02-08T23:42:36.642427487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:36.646749 env[1051]: time="2024-02-08T23:42:36.646696133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:36.651302 env[1051]: time="2024-02-08T23:42:36.651245967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:36.653870 env[1051]: time="2024-02-08T23:42:36.653754572Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 8 23:42:36.657746 env[1051]: time="2024-02-08T23:42:36.657701624Z" level=info msg="CreateContainer within sandbox \"a571b87be1d0aa0e77e469a3ffebc6c92df2d87e6116cd8e84e19ae4be2abacd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 8 23:42:36.670245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170983209.mount: Deactivated successfully.
Feb 8 23:42:36.676021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651241330.mount: Deactivated successfully.
Feb 8 23:42:36.685349 env[1051]: time="2024-02-08T23:42:36.685261957Z" level=info msg="CreateContainer within sandbox \"a571b87be1d0aa0e77e469a3ffebc6c92df2d87e6116cd8e84e19ae4be2abacd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4ea5cef3cd24d9e35b7b40f0d52de91a0abf38c3d836b88a5f7d5d2c48ca7ac9\""
Feb 8 23:42:36.686907 env[1051]: time="2024-02-08T23:42:36.686800824Z" level=info msg="StartContainer for \"4ea5cef3cd24d9e35b7b40f0d52de91a0abf38c3d836b88a5f7d5d2c48ca7ac9\""
Feb 8 23:42:36.707366 systemd[1]: Started cri-containerd-4ea5cef3cd24d9e35b7b40f0d52de91a0abf38c3d836b88a5f7d5d2c48ca7ac9.scope.
Feb 8 23:42:36.754391 env[1051]: time="2024-02-08T23:42:36.754353565Z" level=info msg="StartContainer for \"4ea5cef3cd24d9e35b7b40f0d52de91a0abf38c3d836b88a5f7d5d2c48ca7ac9\" returns successfully"
Feb 8 23:42:37.263232 kubelet[1341]: I0208 23:42:37.263099 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372027591866e+09 pod.CreationTimestamp="2024-02-08 23:42:28 +0000 UTC" firstStartedPulling="2024-02-08 23:42:29.292255303 +0000 UTC m=+64.898835523" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:37.262477293 +0000 UTC m=+72.869057563" watchObservedRunningTime="2024-02-08 23:42:37.262910235 +0000 UTC m=+72.869490515"
Feb 8 23:42:37.613644 kubelet[1341]: E0208 23:42:37.613323 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:38.613609 kubelet[1341]: E0208 23:42:38.613553 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:39.615043 kubelet[1341]: E0208 23:42:39.614960 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:40.617143 kubelet[1341]: E0208 23:42:40.617071 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:41.617322 kubelet[1341]: E0208 23:42:41.617278 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:42.618633 kubelet[1341]: E0208 23:42:42.618577 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:43.620660 kubelet[1341]: E0208 23:42:43.620578 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:44.621489 kubelet[1341]: E0208 23:42:44.621406 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:45.544328 kubelet[1341]: E0208 23:42:45.544156 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:45.622678 kubelet[1341]: E0208 23:42:45.622567 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:46.373245 kubelet[1341]: I0208 23:42:46.373205 1341 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:42:46.381251 systemd[1]: Created slice kubepods-besteffort-pod8f3ae7ef_1f5a_4ff3_8b3b_060efa47e52b.slice.
Feb 8 23:42:46.557292 kubelet[1341]: I0208 23:42:46.557171 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0a080fe5-1110-4a0d-8784-e33d2db1aa5f\" (UniqueName: \"kubernetes.io/nfs/8f3ae7ef-1f5a-4ff3-8b3b-060efa47e52b-pvc-0a080fe5-1110-4a0d-8784-e33d2db1aa5f\") pod \"test-pod-1\" (UID: \"8f3ae7ef-1f5a-4ff3-8b3b-060efa47e52b\") " pod="default/test-pod-1"
Feb 8 23:42:46.557531 kubelet[1341]: I0208 23:42:46.557341 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnp7\" (UniqueName: \"kubernetes.io/projected/8f3ae7ef-1f5a-4ff3-8b3b-060efa47e52b-kube-api-access-xgnp7\") pod \"test-pod-1\" (UID: \"8f3ae7ef-1f5a-4ff3-8b3b-060efa47e52b\") " pod="default/test-pod-1"
Feb 8 23:42:46.623956 kubelet[1341]: E0208 23:42:46.623178 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:46.857930 kernel: FS-Cache: Loaded
Feb 8 23:42:46.965280 kernel: RPC: Registered named UNIX socket transport module.
Feb 8 23:42:46.965716 kernel: RPC: Registered udp transport module.
Feb 8 23:42:46.965783 kernel: RPC: Registered tcp transport module.
Feb 8 23:42:46.966016 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 8 23:42:47.027910 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 8 23:42:47.263516 kernel: NFS: Registering the id_resolver key type
Feb 8 23:42:47.263717 kernel: Key type id_resolver registered
Feb 8 23:42:47.263801 kernel: Key type id_legacy registered
Feb 8 23:42:47.349870 nfsidmap[2742]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Feb 8 23:42:47.360375 nfsidmap[2743]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Feb 8 23:42:47.589216 env[1051]: time="2024-02-08T23:42:47.588278798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8f3ae7ef-1f5a-4ff3-8b3b-060efa47e52b,Namespace:default,Attempt:0,}"
Feb 8 23:42:47.623460 kubelet[1341]: E0208 23:42:47.623393 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:47.675200 systemd-networkd[975]: lxcdaca656f73a9: Link UP
Feb 8 23:42:47.684914 kernel: eth0: renamed from tmp37be6
Feb 8 23:42:47.698332 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:42:47.698504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdaca656f73a9: link becomes ready
Feb 8 23:42:47.702274 systemd-networkd[975]: lxcdaca656f73a9: Gained carrier
Feb 8 23:42:48.045857 env[1051]: time="2024-02-08T23:42:48.045667172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:42:48.046194 env[1051]: time="2024-02-08T23:42:48.046158846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:42:48.046322 env[1051]: time="2024-02-08T23:42:48.046290844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:42:48.047321 env[1051]: time="2024-02-08T23:42:48.047135212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37be6c90ec3b061a2aacb949cdc80846c6690c0ec4344ae9317b5baa65b14d0f pid=2774 runtime=io.containerd.runc.v2
Feb 8 23:42:48.079922 systemd[1]: run-containerd-runc-k8s.io-37be6c90ec3b061a2aacb949cdc80846c6690c0ec4344ae9317b5baa65b14d0f-runc.Nz3zGk.mount: Deactivated successfully.
Feb 8 23:42:48.083410 systemd[1]: Started cri-containerd-37be6c90ec3b061a2aacb949cdc80846c6690c0ec4344ae9317b5baa65b14d0f.scope.
Feb 8 23:42:48.128715 env[1051]: time="2024-02-08T23:42:48.128661418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8f3ae7ef-1f5a-4ff3-8b3b-060efa47e52b,Namespace:default,Attempt:0,} returns sandbox id \"37be6c90ec3b061a2aacb949cdc80846c6690c0ec4344ae9317b5baa65b14d0f\""
Feb 8 23:42:48.131186 env[1051]: time="2024-02-08T23:42:48.131124079Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 8 23:42:48.624456 kubelet[1341]: E0208 23:42:48.624340 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:48.629521 env[1051]: time="2024-02-08T23:42:48.629450584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:48.632881 env[1051]: time="2024-02-08T23:42:48.632774855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:48.636700 env[1051]: time="2024-02-08T23:42:48.636649721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:48.640676 env[1051]: time="2024-02-08T23:42:48.640594148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:42:48.642938 env[1051]: time="2024-02-08T23:42:48.642855009Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 8 23:42:48.647857 env[1051]: time="2024-02-08T23:42:48.647757818Z" level=info msg="CreateContainer within sandbox \"37be6c90ec3b061a2aacb949cdc80846c6690c0ec4344ae9317b5baa65b14d0f\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 8 23:42:48.674734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098839276.mount: Deactivated successfully.
Feb 8 23:42:48.684101 env[1051]: time="2024-02-08T23:42:48.683993824Z" level=info msg="CreateContainer within sandbox \"37be6c90ec3b061a2aacb949cdc80846c6690c0ec4344ae9317b5baa65b14d0f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6e14e9c6ae4c8c6ddb2dfaf308f42d34d96b8529f7ee814c3458971029fdd6ef\""
Feb 8 23:42:48.685593 env[1051]: time="2024-02-08T23:42:48.685535003Z" level=info msg="StartContainer for \"6e14e9c6ae4c8c6ddb2dfaf308f42d34d96b8529f7ee814c3458971029fdd6ef\""
Feb 8 23:42:48.724980 systemd[1]: Started cri-containerd-6e14e9c6ae4c8c6ddb2dfaf308f42d34d96b8529f7ee814c3458971029fdd6ef.scope.
Feb 8 23:42:48.782087 env[1051]: time="2024-02-08T23:42:48.782039354Z" level=info msg="StartContainer for \"6e14e9c6ae4c8c6ddb2dfaf308f42d34d96b8529f7ee814c3458971029fdd6ef\" returns successfully"
Feb 8 23:42:49.478974 systemd-networkd[975]: lxcdaca656f73a9: Gained IPv6LL
Feb 8 23:42:49.625171 kubelet[1341]: E0208 23:42:49.625067 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:50.626692 kubelet[1341]: E0208 23:42:50.626591 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:51.626877 kubelet[1341]: E0208 23:42:51.626788 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:52.627768 kubelet[1341]: E0208 23:42:52.627664 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:53.628489 kubelet[1341]: E0208 23:42:53.628384 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:54.629345 kubelet[1341]: E0208 23:42:54.629206 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:55.629901 kubelet[1341]: E0208 23:42:55.629649 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:56.630668 kubelet[1341]: E0208 23:42:56.630572 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:57.630939 kubelet[1341]: E0208 23:42:57.630790 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:58.632194 kubelet[1341]: E0208 23:42:58.632120 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:59.303422 kubelet[1341]: I0208 23:42:59.303251 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372008551739e+09 pod.CreationTimestamp="2024-02-08 23:42:31 +0000 UTC" firstStartedPulling="2024-02-08 23:42:48.13045434 +0000 UTC m=+83.737034560" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:49.311606281 +0000 UTC m=+84.918186551" watchObservedRunningTime="2024-02-08 23:42:59.303036615 +0000 UTC m=+94.909616925"
Feb 8 23:42:59.391665 env[1051]: time="2024-02-08T23:42:59.391532579Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:42:59.405024 env[1051]: time="2024-02-08T23:42:59.404945223Z" level=info msg="StopContainer for \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\" with timeout 1 (s)"
Feb 8 23:42:59.406072 env[1051]: time="2024-02-08T23:42:59.405992772Z" level=info msg="Stop container \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\" with signal terminated"
Feb 8 23:42:59.421764 systemd-networkd[975]: lxc_health: Link DOWN
Feb 8 23:42:59.421780 systemd-networkd[975]: lxc_health: Lost carrier
Feb 8 23:42:59.458643 systemd[1]: cri-containerd-d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a.scope: Deactivated successfully.
Feb 8 23:42:59.459147 systemd[1]: cri-containerd-d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a.scope: Consumed 9.391s CPU time.
Feb 8 23:42:59.486364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a-rootfs.mount: Deactivated successfully.
Feb 8 23:42:59.497578 env[1051]: time="2024-02-08T23:42:59.497530441Z" level=info msg="shim disconnected" id=d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a
Feb 8 23:42:59.497795 env[1051]: time="2024-02-08T23:42:59.497774410Z" level=warning msg="cleaning up after shim disconnected" id=d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a namespace=k8s.io
Feb 8 23:42:59.497908 env[1051]: time="2024-02-08T23:42:59.497891350Z" level=info msg="cleaning up dead shim"
Feb 8 23:42:59.509031 env[1051]: time="2024-02-08T23:42:59.508979947Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2911 runtime=io.containerd.runc.v2\n"
Feb 8 23:42:59.514214 env[1051]: time="2024-02-08T23:42:59.514154501Z" level=info msg="StopContainer for \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\" returns successfully"
Feb 8 23:42:59.515338 env[1051]: time="2024-02-08T23:42:59.515297268Z" level=info msg="StopPodSandbox for \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\""
Feb 8 23:42:59.515438 env[1051]: time="2024-02-08T23:42:59.515371026Z" level=info msg="Container to stop \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:42:59.515438 env[1051]: time="2024-02-08T23:42:59.515397035Z" level=info msg="Container to stop \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:42:59.515438 env[1051]: time="2024-02-08T23:42:59.515415780Z" level=info msg="Container to stop \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:42:59.518650 env[1051]: time="2024-02-08T23:42:59.515432502Z" level=info msg="Container to stop \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:42:59.518650 env[1051]: time="2024-02-08T23:42:59.515448842Z" level=info msg="Container to stop \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:42:59.517800 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e-shm.mount: Deactivated successfully.
Feb 8 23:42:59.526958 systemd[1]: cri-containerd-3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e.scope: Deactivated successfully.
Feb 8 23:42:59.558217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e-rootfs.mount: Deactivated successfully.
Feb 8 23:42:59.568037 env[1051]: time="2024-02-08T23:42:59.567955739Z" level=info msg="shim disconnected" id=3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e
Feb 8 23:42:59.568037 env[1051]: time="2024-02-08T23:42:59.568029838Z" level=warning msg="cleaning up after shim disconnected" id=3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e namespace=k8s.io
Feb 8 23:42:59.568431 env[1051]: time="2024-02-08T23:42:59.568050167Z" level=info msg="cleaning up dead shim"
Feb 8 23:42:59.581411 env[1051]: time="2024-02-08T23:42:59.581350780Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2943 runtime=io.containerd.runc.v2\n"
Feb 8 23:42:59.581835 env[1051]: time="2024-02-08T23:42:59.581778304Z" level=info msg="TearDown network for sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" successfully"
Feb 8 23:42:59.581887 env[1051]: time="2024-02-08T23:42:59.581863193Z" level=info msg="StopPodSandbox for \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" returns successfully"
Feb 8 23:42:59.632904 kubelet[1341]: E0208 23:42:59.632860 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:42:59.763779 kubelet[1341]: I0208 23:42:59.763728 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cni-path\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.766903 kubelet[1341]: I0208 23:42:59.764288 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hostproc\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.766903 kubelet[1341]: I0208 23:42:59.764362 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-etc-cni-netd\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.766903 kubelet[1341]: I0208 23:42:59.764353 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hostproc" (OuterVolumeSpecName: "hostproc") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.766903 kubelet[1341]: I0208 23:42:59.763965 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cni-path" (OuterVolumeSpecName: "cni-path") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.766903 kubelet[1341]: I0208 23:42:59.764432 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-bpf-maps\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767378 kubelet[1341]: I0208 23:42:59.764449 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.767378 kubelet[1341]: I0208 23:42:59.764492 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-kernel\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767378 kubelet[1341]: I0208 23:42:59.764492 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.767378 kubelet[1341]: I0208 23:42:59.764556 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hubble-tls\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767378 kubelet[1341]: I0208 23:42:59.764620 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-config-path\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767751 kubelet[1341]: I0208 23:42:59.764639 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.767751 kubelet[1341]: I0208 23:42:59.764679 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-clustermesh-secrets\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767751 kubelet[1341]: I0208 23:42:59.764737 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-net\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767751 kubelet[1341]: I0208 23:42:59.764857 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-cgroup\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767751 kubelet[1341]: I0208 23:42:59.764914 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-run\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.767751 kubelet[1341]: I0208 23:42:59.764973 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-kube-api-access-kpq5s\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.768401 kubelet[1341]: I0208 23:42:59.765028 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-lib-modules\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.768401 kubelet[1341]: I0208 23:42:59.765082 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-xtables-lock\") pod \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\" (UID: \"cd01c2fe-73c2-4c8b-8f4e-859ddac69780\") "
Feb 8 23:42:59.768401 kubelet[1341]: I0208 23:42:59.765145 1341 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-bpf-maps\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.768401 kubelet[1341]: I0208 23:42:59.765175 1341 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cni-path\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.768401 kubelet[1341]: I0208 23:42:59.765204 1341 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hostproc\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.768401 kubelet[1341]: I0208 23:42:59.765232 1341 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-etc-cni-netd\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.768401 kubelet[1341]: I0208 23:42:59.765275 1341 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-kernel\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.768890 kubelet[1341]: I0208 23:42:59.765362 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.768890 kubelet[1341]: W0208 23:42:59.765646 1341 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/cd01c2fe-73c2-4c8b-8f4e-859ddac69780/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 8 23:42:59.768890 kubelet[1341]: I0208 23:42:59.765798 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.768890 kubelet[1341]: I0208 23:42:59.767003 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.768890 kubelet[1341]: I0208 23:42:59.767953 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.769237 kubelet[1341]: I0208 23:42:59.768083 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:42:59.773684 kubelet[1341]: I0208 23:42:59.773628 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 8 23:42:59.781023 systemd[1]: var-lib-kubelet-pods-cd01c2fe\x2d73c2\x2d4c8b\x2d8f4e\x2d859ddac69780-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkpq5s.mount: Deactivated successfully.
Feb 8 23:42:59.783687 kubelet[1341]: I0208 23:42:59.783597 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-kube-api-access-kpq5s" (OuterVolumeSpecName: "kube-api-access-kpq5s") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "kube-api-access-kpq5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:42:59.785554 kubelet[1341]: I0208 23:42:59.785482 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:42:59.786127 kubelet[1341]: I0208 23:42:59.786052 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cd01c2fe-73c2-4c8b-8f4e-859ddac69780" (UID: "cd01c2fe-73c2-4c8b-8f4e-859ddac69780"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 8 23:42:59.802552 systemd[1]: Removed slice kubepods-burstable-podcd01c2fe_73c2_4c8b_8f4e_859ddac69780.slice.
Feb 8 23:42:59.802807 systemd[1]: kubepods-burstable-podcd01c2fe_73c2_4c8b_8f4e_859ddac69780.slice: Consumed 9.527s CPU time.
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.865524 1341 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-clustermesh-secrets\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.866139 1341 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-host-proc-sys-net\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.866226 1341 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-hubble-tls\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.866296 1341 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-config-path\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.866345 1341 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-run\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.866423 1341 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-cilium-cgroup\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.866491 1341 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-xtables-lock\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.866692 kubelet[1341]: I0208 23:42:59.866526 1341 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-kube-api-access-kpq5s\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:42:59.867527 kubelet[1341]: I0208 23:42:59.866592 1341 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd01c2fe-73c2-4c8b-8f4e-859ddac69780-lib-modules\") on node \"172.24.4.229\" DevicePath \"\""
Feb 8 23:43:00.331483 kubelet[1341]: I0208 23:43:00.331435 1341 scope.go:115] "RemoveContainer" containerID="d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a"
Feb 8 23:43:00.342486 systemd[1]: var-lib-kubelet-pods-cd01c2fe\x2d73c2\x2d4c8b\x2d8f4e\x2d859ddac69780-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 8 23:43:00.342787 systemd[1]: var-lib-kubelet-pods-cd01c2fe\x2d73c2\x2d4c8b\x2d8f4e\x2d859ddac69780-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:43:00.347881 env[1051]: time="2024-02-08T23:43:00.347723815Z" level=info msg="RemoveContainer for \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\""
Feb 8 23:43:00.359579 env[1051]: time="2024-02-08T23:43:00.359504200Z" level=info msg="RemoveContainer for \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\" returns successfully"
Feb 8 23:43:00.360492 kubelet[1341]: I0208 23:43:00.360414 1341 scope.go:115] "RemoveContainer" containerID="e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042"
Feb 8 23:43:00.363901 env[1051]: time="2024-02-08T23:43:00.363755398Z" level=info msg="RemoveContainer for \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\""
Feb 8 23:43:00.369646 env[1051]: time="2024-02-08T23:43:00.369544797Z" level=info msg="RemoveContainer for \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\" returns successfully"
Feb 8 23:43:00.370164 kubelet[1341]: I0208 23:43:00.370071 1341 scope.go:115] "RemoveContainer" containerID="874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520"
Feb 8 23:43:00.372710 env[1051]: time="2024-02-08T23:43:00.372615397Z" level=info msg="RemoveContainer for \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\""
Feb 8 23:43:00.381226 env[1051]: time="2024-02-08T23:43:00.381058001Z" level=info msg="RemoveContainer for \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\" returns successfully"
Feb 8 23:43:00.381771 kubelet[1341]: I0208 23:43:00.381680 1341 scope.go:115] "RemoveContainer" containerID="8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8"
Feb 8 23:43:00.386992 env[1051]: time="2024-02-08T23:43:00.386505006Z" level=info msg="RemoveContainer for \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\""
Feb 8 23:43:00.392759 env[1051]: time="2024-02-08T23:43:00.392680109Z" level=info msg="RemoveContainer for \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\" returns successfully"
Feb 8 23:43:00.394271 kubelet[1341]: I0208 23:43:00.394215 1341 scope.go:115] "RemoveContainer" containerID="8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de"
Feb 8 23:43:00.398244 env[1051]: time="2024-02-08T23:43:00.398160366Z" level=info msg="RemoveContainer for \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\""
Feb 8 23:43:00.404267 env[1051]: time="2024-02-08T23:43:00.404173084Z" level=info msg="RemoveContainer for \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\" returns successfully"
Feb 8 23:43:00.404716 kubelet[1341]: I0208 23:43:00.404675 1341 scope.go:115] "RemoveContainer" containerID="d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a"
Feb 8 23:43:00.405610 env[1051]: time="2024-02-08T23:43:00.405464792Z" level=error msg="ContainerStatus for \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\": not found"
Feb 8 23:43:00.406154 kubelet[1341]: E0208 23:43:00.406118 1341 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\": not found" containerID="d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a"
Feb 8 23:43:00.406467 kubelet[1341]: I0208 23:43:00.406427 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a} err="failed to get container status \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7b1eb4068080423f05f13716bd7e79c05bf36edb2c71caf86fbc45e46451c9a\": not found"
Feb 8 23:43:00.406698 kubelet[1341]: I0208 23:43:00.406668 1341 scope.go:115] "RemoveContainer" containerID="e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042"
Feb 8 23:43:00.407708 env[1051]: time="2024-02-08T23:43:00.407555099Z" level=error msg="ContainerStatus for \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\": not found"
Feb 8 23:43:00.408230 kubelet[1341]: E0208 23:43:00.408191 1341 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\": not found" containerID="e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042"
Feb 8 23:43:00.408537 kubelet[1341]: I0208 23:43:00.408503 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042} err="failed to get container status \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\": rpc error: code = NotFound desc = an error occurred when try to find container \"e766ba78b8d2d528b3d34bf25c7dfe098fa510c32bd4a8869270a8e31ca47042\": not found"
Feb 8 23:43:00.408770 kubelet[1341]: I0208 23:43:00.408738 1341 scope.go:115] "RemoveContainer" containerID="874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520"
Feb 8 23:43:00.409735 env[1051]: time="2024-02-08T23:43:00.409478323Z" level=error msg="ContainerStatus for \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\": not found"
Feb 8 23:43:00.410181 kubelet[1341]: E0208 23:43:00.410142 1341 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\": not found" containerID="874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520"
Feb 8 23:43:00.410469 kubelet[1341]: I0208 23:43:00.410435 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520} err="failed to get container status \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\": rpc error: code = NotFound desc = an error occurred when try to find container \"874be25eb43ffe951f0d73fd3f030b22b44f48aa12a3d55787b632c963e03520\": not found"
Feb 8 23:43:00.410757 kubelet[1341]: I0208 23:43:00.410718 1341 scope.go:115] "RemoveContainer" containerID="8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8"
Feb 8 23:43:00.411728 env[1051]: time="2024-02-08T23:43:00.411550647Z" level=error msg="ContainerStatus for \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\": not found"
Feb 8 23:43:00.412320 kubelet[1341]: E0208 23:43:00.412283 1341 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\": not found" containerID="8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8"
Feb 8 23:43:00.412649 kubelet[1341]: I0208 23:43:00.412621 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8} err="failed to get container status \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\": rpc error: code = NotFound desc = an error occurred when try to find container \"8508039c6b03832f9714c5e30f7a8381be09f910196d582ae3c792a248381ad8\": not found"
Feb 8 23:43:00.412961 kubelet[1341]: I0208 23:43:00.412904 1341 scope.go:115] "RemoveContainer" containerID="8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de"
Feb 8 23:43:00.414189 env[1051]: time="2024-02-08T23:43:00.414046296Z" level=error msg="ContainerStatus for \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\": not found"
Feb 8 23:43:00.414687 kubelet[1341]: E0208 23:43:00.414654 1341 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\": not found" containerID="8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de"
Feb 8 23:43:00.415062 kubelet[1341]: I0208 23:43:00.415031 1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de} err="failed to get container status \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c57dc2fd428beb1d15a72867c3dceb7680ad7aadca843f12275585d0f0472de\": not found"
Feb 8 23:43:00.634589 kubelet[1341]: E0208 23:43:00.634364 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:00.760747 kubelet[1341]: E0208 23:43:00.760622 1341 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 8 23:43:01.636433 kubelet[1341]: E0208 23:43:01.636345 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:01.795127 kubelet[1341]: I0208 23:43:01.794612 1341 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cd01c2fe-73c2-4c8b-8f4e-859ddac69780 path="/var/lib/kubelet/pods/cd01c2fe-73c2-4c8b-8f4e-859ddac69780/volumes"
Feb 8 23:43:02.637081 kubelet[1341]: E0208 23:43:02.636936 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:03.637996 kubelet[1341]: E0208 23:43:03.637916 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:03.964398 kubelet[1341]: I0208 23:43:03.963809 1341 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:43:03.964857 kubelet[1341]: E0208 23:43:03.964777 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd01c2fe-73c2-4c8b-8f4e-859ddac69780" containerName="mount-bpf-fs"
Feb 8 23:43:03.965092 kubelet[1341]: E0208 23:43:03.965063 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd01c2fe-73c2-4c8b-8f4e-859ddac69780" containerName="cilium-agent"
Feb 8 23:43:03.965369 kubelet[1341]: E0208 23:43:03.965303 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd01c2fe-73c2-4c8b-8f4e-859ddac69780" containerName="apply-sysctl-overwrites"
Feb 8 23:43:03.965593 kubelet[1341]: E0208 23:43:03.965565 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd01c2fe-73c2-4c8b-8f4e-859ddac69780" containerName="clean-cilium-state"
Feb 8 23:43:03.965843 kubelet[1341]: E0208 23:43:03.965785 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd01c2fe-73c2-4c8b-8f4e-859ddac69780" containerName="mount-cgroup"
Feb 8 23:43:03.966123 kubelet[1341]: I0208 23:43:03.966086 1341 memory_manager.go:346] "RemoveStaleState removing state" podUID="cd01c2fe-73c2-4c8b-8f4e-859ddac69780" containerName="cilium-agent"
Feb 8 23:43:03.980211 systemd[1]: Created slice kubepods-besteffort-pod34aec365_7e61_4395_a367_e32f5d62a4ee.slice.
Feb 8 23:43:04.022646 kubelet[1341]: I0208 23:43:04.021719 1341 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:43:04.035246 systemd[1]: Created slice kubepods-burstable-podb18a0295_aa77_43fd_a1b8_83dd50fa0417.slice.
Feb 8 23:43:04.098700 kubelet[1341]: I0208 23:43:04.098594 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-bpf-maps\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099077 kubelet[1341]: I0208 23:43:04.098723 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-etc-cni-netd\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099077 kubelet[1341]: I0208 23:43:04.098803 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-config-path\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099077 kubelet[1341]: I0208 23:43:04.098939 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-kernel\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099077 kubelet[1341]: I0208 23:43:04.099034 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbm8m\" (UniqueName: \"kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-kube-api-access-rbm8m\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099592 kubelet[1341]: I0208 23:43:04.099110 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-cgroup\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099592 kubelet[1341]: I0208 23:43:04.099192 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-ipsec-secrets\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099592 kubelet[1341]: I0208 23:43:04.099275 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hubble-tls\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099592 kubelet[1341]: I0208 23:43:04.099370 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-run\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099592 kubelet[1341]: I0208 23:43:04.099457 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hostproc\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.099592 kubelet[1341]: I0208 23:43:04.099550 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-clustermesh-secrets\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.100351 kubelet[1341]: I0208 23:43:04.099640 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34aec365-7e61-4395-a367-e32f5d62a4ee-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-cj4sd\" (UID: \"34aec365-7e61-4395-a367-e32f5d62a4ee\") " pod="kube-system/cilium-operator-f59cbd8c6-cj4sd"
Feb 8 23:43:04.100351 kubelet[1341]: I0208 23:43:04.099733 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cni-path\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.100689 kubelet[1341]: I0208 23:43:04.099812 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-lib-modules\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd"
Feb 8 23:43:04.101871
kubelet[1341]: I0208 23:43:04.101683 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-xtables-lock\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd" Feb 8 23:43:04.101871 kubelet[1341]: I0208 23:43:04.101808 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-net\") pod \"cilium-jt9vd\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " pod="kube-system/cilium-jt9vd" Feb 8 23:43:04.102182 kubelet[1341]: I0208 23:43:04.102077 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgxxk\" (UniqueName: \"kubernetes.io/projected/34aec365-7e61-4395-a367-e32f5d62a4ee-kube-api-access-mgxxk\") pod \"cilium-operator-f59cbd8c6-cj4sd\" (UID: \"34aec365-7e61-4395-a367-e32f5d62a4ee\") " pod="kube-system/cilium-operator-f59cbd8c6-cj4sd" Feb 8 23:43:04.290222 env[1051]: time="2024-02-08T23:43:04.290186591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cj4sd,Uid:34aec365-7e61-4395-a367-e32f5d62a4ee,Namespace:kube-system,Attempt:0,}" Feb 8 23:43:04.305711 env[1051]: time="2024-02-08T23:43:04.305615607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:43:04.305896 env[1051]: time="2024-02-08T23:43:04.305684777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:43:04.305896 env[1051]: time="2024-02-08T23:43:04.305699114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:43:04.306128 env[1051]: time="2024-02-08T23:43:04.306088195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e6df90609b8ba23136149c5271311593f0d5ac61599a8a4061f5ce3121a7c3f pid=2972 runtime=io.containerd.runc.v2 Feb 8 23:43:04.318574 systemd[1]: Started cri-containerd-7e6df90609b8ba23136149c5271311593f0d5ac61599a8a4061f5ce3121a7c3f.scope. Feb 8 23:43:04.350887 env[1051]: time="2024-02-08T23:43:04.350790106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jt9vd,Uid:b18a0295-aa77-43fd-a1b8-83dd50fa0417,Namespace:kube-system,Attempt:0,}" Feb 8 23:43:04.370533 env[1051]: time="2024-02-08T23:43:04.370489435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cj4sd,Uid:34aec365-7e61-4395-a367-e32f5d62a4ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e6df90609b8ba23136149c5271311593f0d5ac61599a8a4061f5ce3121a7c3f\"" Feb 8 23:43:04.372315 env[1051]: time="2024-02-08T23:43:04.372288034Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:43:04.377974 env[1051]: time="2024-02-08T23:43:04.377803305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:43:04.378177 env[1051]: time="2024-02-08T23:43:04.378129618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:43:04.378177 env[1051]: time="2024-02-08T23:43:04.378157521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:43:04.378588 env[1051]: time="2024-02-08T23:43:04.378518860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f pid=3011 runtime=io.containerd.runc.v2 Feb 8 23:43:04.391904 systemd[1]: Started cri-containerd-0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f.scope. Feb 8 23:43:04.424570 env[1051]: time="2024-02-08T23:43:04.424480366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jt9vd,Uid:b18a0295-aa77-43fd-a1b8-83dd50fa0417,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\"" Feb 8 23:43:04.427623 env[1051]: time="2024-02-08T23:43:04.427591190Z" level=info msg="CreateContainer within sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:43:04.442662 env[1051]: time="2024-02-08T23:43:04.442576964Z" level=info msg="CreateContainer within sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\"" Feb 8 23:43:04.443447 env[1051]: time="2024-02-08T23:43:04.443379852Z" level=info msg="StartContainer for \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\"" Feb 8 23:43:04.459760 systemd[1]: Started cri-containerd-6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d.scope. Feb 8 23:43:04.473047 systemd[1]: cri-containerd-6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d.scope: Deactivated successfully. 
Feb 8 23:43:04.492744 env[1051]: time="2024-02-08T23:43:04.492694840Z" level=info msg="shim disconnected" id=6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d Feb 8 23:43:04.492974 env[1051]: time="2024-02-08T23:43:04.492953787Z" level=warning msg="cleaning up after shim disconnected" id=6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d namespace=k8s.io Feb 8 23:43:04.493068 env[1051]: time="2024-02-08T23:43:04.493052542Z" level=info msg="cleaning up dead shim" Feb 8 23:43:04.501632 env[1051]: time="2024-02-08T23:43:04.501579952Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3071 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:43:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:43:04.502019 env[1051]: time="2024-02-08T23:43:04.501900304Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Feb 8 23:43:04.504927 env[1051]: time="2024-02-08T23:43:04.504885221Z" level=error msg="Failed to pipe stdout of container \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\"" error="reading from a closed fifo" Feb 8 23:43:04.504987 env[1051]: time="2024-02-08T23:43:04.504945355Z" level=error msg="Failed to pipe stderr of container \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\"" error="reading from a closed fifo" Feb 8 23:43:04.509006 env[1051]: time="2024-02-08T23:43:04.508953986Z" level=error msg="StartContainer for \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:43:04.509455 kubelet[1341]: E0208 23:43:04.509261 1341 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d" Feb 8 23:43:04.509455 kubelet[1341]: E0208 23:43:04.509391 1341 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:43:04.509455 kubelet[1341]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:43:04.509455 kubelet[1341]: rm /hostbin/cilium-mount Feb 8 23:43:04.509630 kubelet[1341]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rbm8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jt9vd_kube-system(b18a0295-aa77-43fd-a1b8-83dd50fa0417): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:43:04.509726 kubelet[1341]: E0208 23:43:04.509434 1341 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jt9vd" podUID=b18a0295-aa77-43fd-a1b8-83dd50fa0417 Feb 8 23:43:04.639953 kubelet[1341]: E0208 23:43:04.638612 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:43:05.358447 env[1051]: time="2024-02-08T23:43:05.358344317Z" level=info msg="CreateContainer within sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 8 23:43:05.386474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073274996.mount: Deactivated successfully. 
Feb 8 23:43:05.401061 env[1051]: time="2024-02-08T23:43:05.400967560Z" level=info msg="CreateContainer within sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\"" Feb 8 23:43:05.402507 env[1051]: time="2024-02-08T23:43:05.402431581Z" level=info msg="StartContainer for \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\"" Feb 8 23:43:05.444254 systemd[1]: Started cri-containerd-58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea.scope. Feb 8 23:43:05.455890 systemd[1]: cri-containerd-58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea.scope: Deactivated successfully. Feb 8 23:43:05.474677 env[1051]: time="2024-02-08T23:43:05.474629763Z" level=info msg="shim disconnected" id=58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea Feb 8 23:43:05.474930 env[1051]: time="2024-02-08T23:43:05.474909999Z" level=warning msg="cleaning up after shim disconnected" id=58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea namespace=k8s.io Feb 8 23:43:05.475001 env[1051]: time="2024-02-08T23:43:05.474986673Z" level=info msg="cleaning up dead shim" Feb 8 23:43:05.484122 env[1051]: time="2024-02-08T23:43:05.484064497Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3108 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:43:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:43:05.484427 env[1051]: time="2024-02-08T23:43:05.484337269Z" level=error msg="copy shim log" error="read /proc/self/fd/72: file already closed" Feb 8 23:43:05.484658 env[1051]: 
time="2024-02-08T23:43:05.484623086Z" level=error msg="Failed to pipe stderr of container \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\"" error="reading from a closed fifo" Feb 8 23:43:05.484935 env[1051]: time="2024-02-08T23:43:05.484901920Z" level=error msg="Failed to pipe stdout of container \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\"" error="reading from a closed fifo" Feb 8 23:43:05.487783 env[1051]: time="2024-02-08T23:43:05.487743138Z" level=error msg="StartContainer for \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:43:05.488512 kubelet[1341]: E0208 23:43:05.487960 1341 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea" Feb 8 23:43:05.488512 kubelet[1341]: E0208 23:43:05.488417 1341 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:43:05.488512 kubelet[1341]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:43:05.488512 kubelet[1341]: rm /hostbin/cilium-mount Feb 8 23:43:05.488681 kubelet[1341]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rbm8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jt9vd_kube-system(b18a0295-aa77-43fd-a1b8-83dd50fa0417): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:43:05.488793 kubelet[1341]: E0208 23:43:05.488459 1341 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jt9vd" podUID=b18a0295-aa77-43fd-a1b8-83dd50fa0417 Feb 8 23:43:05.543509 kubelet[1341]: E0208 23:43:05.543463 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:43:05.640916 kubelet[1341]: E0208 23:43:05.639800 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:43:05.762129 kubelet[1341]: E0208 23:43:05.762090 1341 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:43:06.231489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea-rootfs.mount: Deactivated successfully. Feb 8 23:43:06.356665 kubelet[1341]: I0208 23:43:06.356606 1341 scope.go:115] "RemoveContainer" containerID="6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d" Feb 8 23:43:06.357076 kubelet[1341]: I0208 23:43:06.357044 1341 scope.go:115] "RemoveContainer" containerID="6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d" Feb 8 23:43:06.358809 env[1051]: time="2024-02-08T23:43:06.358767125Z" level=info msg="RemoveContainer for \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\"" Feb 8 23:43:06.366592 env[1051]: time="2024-02-08T23:43:06.366555586Z" level=info msg="RemoveContainer for \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\" returns successfully" Feb 8 23:43:06.366908 env[1051]: time="2024-02-08T23:43:06.366864948Z" level=info msg="RemoveContainer for \"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\"" Feb 8 23:43:06.367026 env[1051]: time="2024-02-08T23:43:06.367005191Z" level=info msg="RemoveContainer for 
\"6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d\" returns successfully" Feb 8 23:43:06.367727 kubelet[1341]: E0208 23:43:06.367691 1341 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-jt9vd_kube-system(b18a0295-aa77-43fd-a1b8-83dd50fa0417)\"" pod="kube-system/cilium-jt9vd" podUID=b18a0295-aa77-43fd-a1b8-83dd50fa0417 Feb 8 23:43:06.640655 kubelet[1341]: E0208 23:43:06.640596 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:43:07.040298 env[1051]: time="2024-02-08T23:43:07.040233010Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:43:07.043207 env[1051]: time="2024-02-08T23:43:07.043159707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:43:07.045451 env[1051]: time="2024-02-08T23:43:07.045397822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:43:07.046539 env[1051]: time="2024-02-08T23:43:07.046457001Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:43:07.050351 env[1051]: time="2024-02-08T23:43:07.050277929Z" level=info msg="CreateContainer within sandbox 
\"7e6df90609b8ba23136149c5271311593f0d5ac61599a8a4061f5ce3121a7c3f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:43:07.069471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467509722.mount: Deactivated successfully. Feb 8 23:43:07.088016 env[1051]: time="2024-02-08T23:43:07.087933503Z" level=info msg="CreateContainer within sandbox \"7e6df90609b8ba23136149c5271311593f0d5ac61599a8a4061f5ce3121a7c3f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5716e6ce044db79fb7ab43b5a116a08f76a795ca02823c843c8437571163aa5b\"" Feb 8 23:43:07.089663 env[1051]: time="2024-02-08T23:43:07.089533710Z" level=info msg="StartContainer for \"5716e6ce044db79fb7ab43b5a116a08f76a795ca02823c843c8437571163aa5b\"" Feb 8 23:43:07.115885 systemd[1]: Started cri-containerd-5716e6ce044db79fb7ab43b5a116a08f76a795ca02823c843c8437571163aa5b.scope. Feb 8 23:43:07.161515 env[1051]: time="2024-02-08T23:43:07.161463186Z" level=info msg="StartContainer for \"5716e6ce044db79fb7ab43b5a116a08f76a795ca02823c843c8437571163aa5b\" returns successfully" Feb 8 23:43:07.368116 env[1051]: time="2024-02-08T23:43:07.363575585Z" level=info msg="StopPodSandbox for \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\"" Feb 8 23:43:07.368116 env[1051]: time="2024-02-08T23:43:07.363640407Z" level=info msg="Container to stop \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:43:07.365611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f-shm.mount: Deactivated successfully. Feb 8 23:43:07.380250 systemd[1]: cri-containerd-0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f.scope: Deactivated successfully. 
Feb 8 23:43:07.416365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f-rootfs.mount: Deactivated successfully. Feb 8 23:43:07.600808 kubelet[1341]: W0208 23:43:07.600731 1341 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb18a0295_aa77_43fd_a1b8_83dd50fa0417.slice/cri-containerd-6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d.scope WatchSource:0}: container "6ba94f1b63ce041a13ac22e20a74d49fdedf54220ca55ee4094f743ecba4446d" in namespace "k8s.io": not found Feb 8 23:43:07.643108 kubelet[1341]: E0208 23:43:07.642102 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:43:07.796212 env[1051]: time="2024-02-08T23:43:07.796128581Z" level=info msg="shim disconnected" id=0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f Feb 8 23:43:07.797663 env[1051]: time="2024-02-08T23:43:07.797613540Z" level=warning msg="cleaning up after shim disconnected" id=0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f namespace=k8s.io Feb 8 23:43:07.797913 env[1051]: time="2024-02-08T23:43:07.797871886Z" level=info msg="cleaning up dead shim" Feb 8 23:43:07.826443 env[1051]: time="2024-02-08T23:43:07.826338399Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3179 runtime=io.containerd.runc.v2\n" Feb 8 23:43:07.827379 env[1051]: time="2024-02-08T23:43:07.827323569Z" level=info msg="TearDown network for sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" successfully" Feb 8 23:43:07.827652 env[1051]: time="2024-02-08T23:43:07.827578208Z" level=info msg="StopPodSandbox for \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" returns successfully" Feb 8 23:43:07.934978 kubelet[1341]: I0208 
23:43:07.934382 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-kernel\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.934978 kubelet[1341]: I0208 23:43:07.934465 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-xtables-lock\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.934978 kubelet[1341]: I0208 23:43:07.934530 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hubble-tls\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.934978 kubelet[1341]: I0208 23:43:07.934524 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.934978 kubelet[1341]: I0208 23:43:07.934580 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-run\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.934978 kubelet[1341]: I0208 23:43:07.934628 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hostproc\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.935552 kubelet[1341]: I0208 23:43:07.934644 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.935552 kubelet[1341]: I0208 23:43:07.934678 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cni-path\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.935552 kubelet[1341]: I0208 23:43:07.934736 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-config-path\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.935552 kubelet[1341]: I0208 23:43:07.934730 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.935552 kubelet[1341]: I0208 23:43:07.934788 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-net\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.936618 kubelet[1341]: I0208 23:43:07.936034 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-ipsec-secrets\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.936618 kubelet[1341]: I0208 23:43:07.936121 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-clustermesh-secrets\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.936618 kubelet[1341]: I0208 23:43:07.936175 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-cgroup\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.936618 kubelet[1341]: I0208 23:43:07.936223 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-bpf-maps\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.936618 kubelet[1341]: I0208 23:43:07.936270 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-etc-cni-netd\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.936618 kubelet[1341]: I0208 23:43:07.936322 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-lib-modules\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.937169 kubelet[1341]: I0208 23:43:07.936394 1341 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbm8m\" (UniqueName: \"kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-kube-api-access-rbm8m\") pod \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\" (UID: \"b18a0295-aa77-43fd-a1b8-83dd50fa0417\") " Feb 8 23:43:07.937169 kubelet[1341]: I0208 23:43:07.936463 1341 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-run\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:07.937169 kubelet[1341]: I0208 23:43:07.936496 1341 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-kernel\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:07.937169 kubelet[1341]: I0208 23:43:07.936528 1341 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-xtables-lock\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:07.939569 kubelet[1341]: I0208 23:43:07.938300 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hostproc" (OuterVolumeSpecName: "hostproc") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: 
"b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.939569 kubelet[1341]: I0208 23:43:07.938397 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cni-path" (OuterVolumeSpecName: "cni-path") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.939569 kubelet[1341]: W0208 23:43:07.938661 1341 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b18a0295-aa77-43fd-a1b8-83dd50fa0417/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:43:07.941645 kubelet[1341]: I0208 23:43:07.941460 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.941645 kubelet[1341]: I0208 23:43:07.941537 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.960954 kubelet[1341]: I0208 23:43:07.945979 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.960954 kubelet[1341]: I0208 23:43:07.946064 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.960954 kubelet[1341]: I0208 23:43:07.946119 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:43:07.960954 kubelet[1341]: I0208 23:43:07.948883 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:43:07.960954 kubelet[1341]: I0208 23:43:07.955862 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:43:07.949534 systemd[1]: var-lib-kubelet-pods-b18a0295\x2daa77\x2d43fd\x2da1b8\x2d83dd50fa0417-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:43:07.961761 kubelet[1341]: I0208 23:43:07.957214 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-kube-api-access-rbm8m" (OuterVolumeSpecName: "kube-api-access-rbm8m") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "kube-api-access-rbm8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:43:07.949849 systemd[1]: var-lib-kubelet-pods-b18a0295\x2daa77\x2d43fd\x2da1b8\x2d83dd50fa0417-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 8 23:43:07.960108 systemd[1]: var-lib-kubelet-pods-b18a0295\x2daa77\x2d43fd\x2da1b8\x2d83dd50fa0417-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drbm8m.mount: Deactivated successfully. Feb 8 23:43:07.962679 kubelet[1341]: I0208 23:43:07.962633 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:43:07.967636 kubelet[1341]: I0208 23:43:07.967575 1341 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b18a0295-aa77-43fd-a1b8-83dd50fa0417" (UID: "b18a0295-aa77-43fd-a1b8-83dd50fa0417"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:43:08.037294 kubelet[1341]: I0208 23:43:08.037247 1341 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-lib-modules\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.037628 kubelet[1341]: I0208 23:43:08.037602 1341 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-rbm8m\" (UniqueName: \"kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-kube-api-access-rbm8m\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.037794 kubelet[1341]: I0208 23:43:08.037773 1341 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hostproc\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.038035 kubelet[1341]: I0208 23:43:08.038011 1341 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b18a0295-aa77-43fd-a1b8-83dd50fa0417-hubble-tls\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.038209 kubelet[1341]: I0208 23:43:08.038188 1341 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cni-path\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.038359 kubelet[1341]: I0208 23:43:08.038340 1341 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-config-path\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.038520 kubelet[1341]: I0208 23:43:08.038500 1341 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-host-proc-sys-net\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.038695 kubelet[1341]: I0208 23:43:08.038674 1341 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-ipsec-secrets\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.038895 kubelet[1341]: I0208 23:43:08.038871 1341 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b18a0295-aa77-43fd-a1b8-83dd50fa0417-clustermesh-secrets\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.039083 kubelet[1341]: I0208 23:43:08.039061 1341 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-etc-cni-netd\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.039247 kubelet[1341]: I0208 23:43:08.039226 1341 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-cilium-cgroup\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.039408 kubelet[1341]: I0208 23:43:08.039387 1341 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b18a0295-aa77-43fd-a1b8-83dd50fa0417-bpf-maps\") on node \"172.24.4.229\" DevicePath \"\"" Feb 8 23:43:08.231447 systemd[1]: var-lib-kubelet-pods-b18a0295\x2daa77\x2d43fd\x2da1b8\x2d83dd50fa0417-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 8 23:43:08.384591 kubelet[1341]: I0208 23:43:08.384549 1341 scope.go:115] "RemoveContainer" containerID="58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea" Feb 8 23:43:08.387696 env[1051]: time="2024-02-08T23:43:08.387608717Z" level=info msg="RemoveContainer for \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\"" Feb 8 23:43:08.394482 env[1051]: time="2024-02-08T23:43:08.394398631Z" level=info msg="RemoveContainer for \"58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea\" returns successfully" Feb 8 23:43:08.398555 systemd[1]: Removed slice kubepods-burstable-podb18a0295_aa77_43fd_a1b8_83dd50fa0417.slice. Feb 8 23:43:08.401216 kubelet[1341]: I0208 23:43:08.400609 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-cj4sd" podStartSLOduration=-9.223372031454248e+09 pod.CreationTimestamp="2024-02-08 23:43:03 +0000 UTC" firstStartedPulling="2024-02-08 23:43:04.371740294 +0000 UTC m=+99.978320515" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:43:07.398719483 +0000 UTC m=+103.005299753" watchObservedRunningTime="2024-02-08 23:43:08.400528134 +0000 UTC m=+104.007108404" Feb 8 23:43:08.426612 kubelet[1341]: I0208 23:43:08.426562 1341 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:43:08.427029 kubelet[1341]: E0208 23:43:08.426998 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b18a0295-aa77-43fd-a1b8-83dd50fa0417" containerName="mount-cgroup" Feb 8 23:43:08.427227 kubelet[1341]: E0208 23:43:08.427202 1341 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b18a0295-aa77-43fd-a1b8-83dd50fa0417" containerName="mount-cgroup" Feb 8 23:43:08.427423 kubelet[1341]: I0208 23:43:08.427398 1341 memory_manager.go:346] "RemoveStaleState removing state" podUID="b18a0295-aa77-43fd-a1b8-83dd50fa0417" containerName="mount-cgroup" Feb 8 23:43:08.427584 kubelet[1341]: I0208 
23:43:08.427561 1341 memory_manager.go:346] "RemoveStaleState removing state" podUID="b18a0295-aa77-43fd-a1b8-83dd50fa0417" containerName="mount-cgroup" Feb 8 23:43:08.439676 systemd[1]: Created slice kubepods-burstable-pod88af8420_a5b8_4466_b465_a406924d833c.slice. Feb 8 23:43:08.543332 kubelet[1341]: I0208 23:43:08.543289 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-bpf-maps\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.543873 kubelet[1341]: I0208 23:43:08.543788 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-host-proc-sys-net\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.544212 kubelet[1341]: I0208 23:43:08.544114 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wtt\" (UniqueName: \"kubernetes.io/projected/88af8420-a5b8-4466-b465-a406924d833c-kube-api-access-24wtt\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.544535 kubelet[1341]: I0208 23:43:08.544440 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-cilium-run\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.544892 kubelet[1341]: I0208 23:43:08.544757 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-lib-modules\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.545227 kubelet[1341]: I0208 23:43:08.545131 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88af8420-a5b8-4466-b465-a406924d833c-clustermesh-secrets\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.545590 kubelet[1341]: I0208 23:43:08.545563 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88af8420-a5b8-4466-b465-a406924d833c-hubble-tls\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.545913 kubelet[1341]: I0208 23:43:08.545802 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-xtables-lock\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.546249 kubelet[1341]: I0208 23:43:08.546153 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88af8420-a5b8-4466-b465-a406924d833c-cilium-config-path\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.546566 kubelet[1341]: I0208 23:43:08.546473 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-hostproc\") pod \"cilium-fdm7f\" (UID: 
\"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.546886 kubelet[1341]: I0208 23:43:08.546786 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-cni-path\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.547219 kubelet[1341]: I0208 23:43:08.547119 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-etc-cni-netd\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.547540 kubelet[1341]: I0208 23:43:08.547441 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/88af8420-a5b8-4466-b465-a406924d833c-cilium-ipsec-secrets\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.547884 kubelet[1341]: I0208 23:43:08.547754 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-host-proc-sys-kernel\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.548194 kubelet[1341]: I0208 23:43:08.548101 1341 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88af8420-a5b8-4466-b465-a406924d833c-cilium-cgroup\") pod \"cilium-fdm7f\" (UID: \"88af8420-a5b8-4466-b465-a406924d833c\") " pod="kube-system/cilium-fdm7f" Feb 8 23:43:08.643790 kubelet[1341]: E0208 
23:43:08.643721 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:43:08.749879 env[1051]: time="2024-02-08T23:43:08.749692984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdm7f,Uid:88af8420-a5b8-4466-b465-a406924d833c,Namespace:kube-system,Attempt:0,}" Feb 8 23:43:08.782491 env[1051]: time="2024-02-08T23:43:08.782349244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:43:08.783050 env[1051]: time="2024-02-08T23:43:08.782437299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:43:08.783050 env[1051]: time="2024-02-08T23:43:08.782469680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:43:08.783050 env[1051]: time="2024-02-08T23:43:08.782900760Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636 pid=3209 runtime=io.containerd.runc.v2 Feb 8 23:43:08.817024 systemd[1]: Started cri-containerd-eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636.scope. 
Feb 8 23:43:08.866000 env[1051]: time="2024-02-08T23:43:08.865928007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdm7f,Uid:88af8420-a5b8-4466-b465-a406924d833c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\"" Feb 8 23:43:08.869778 env[1051]: time="2024-02-08T23:43:08.869747121Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:43:08.884859 env[1051]: time="2024-02-08T23:43:08.884805206Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70\"" Feb 8 23:43:08.885783 env[1051]: time="2024-02-08T23:43:08.885760220Z" level=info msg="StartContainer for \"8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70\"" Feb 8 23:43:08.907275 systemd[1]: Started cri-containerd-8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70.scope. Feb 8 23:43:08.949253 env[1051]: time="2024-02-08T23:43:08.949184175Z" level=info msg="StartContainer for \"8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70\" returns successfully" Feb 8 23:43:08.974540 systemd[1]: cri-containerd-8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70.scope: Deactivated successfully. 
Feb 8 23:43:09.014284 env[1051]: time="2024-02-08T23:43:09.014197214Z" level=info msg="shim disconnected" id=8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70 Feb 8 23:43:09.014284 env[1051]: time="2024-02-08T23:43:09.014255855Z" level=warning msg="cleaning up after shim disconnected" id=8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70 namespace=k8s.io Feb 8 23:43:09.014284 env[1051]: time="2024-02-08T23:43:09.014268318Z" level=info msg="cleaning up dead shim" Feb 8 23:43:09.022151 env[1051]: time="2024-02-08T23:43:09.022103856Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3294 runtime=io.containerd.runc.v2\n" Feb 8 23:43:09.397273 env[1051]: time="2024-02-08T23:43:09.397182554Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:43:09.427029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371066222.mount: Deactivated successfully. Feb 8 23:43:09.441698 env[1051]: time="2024-02-08T23:43:09.441587251Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a\"" Feb 8 23:43:09.443389 env[1051]: time="2024-02-08T23:43:09.443313483Z" level=info msg="StartContainer for \"40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a\"" Feb 8 23:43:09.495299 systemd[1]: Started cri-containerd-40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a.scope. 
Feb 8 23:43:09.551988 env[1051]: time="2024-02-08T23:43:09.551897439Z" level=info msg="StartContainer for \"40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a\" returns successfully" Feb 8 23:43:09.557998 systemd[1]: cri-containerd-40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a.scope: Deactivated successfully. Feb 8 23:43:09.645724 kubelet[1341]: E0208 23:43:09.645610 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:43:09.653644 env[1051]: time="2024-02-08T23:43:09.652091525Z" level=info msg="shim disconnected" id=40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a Feb 8 23:43:09.654028 env[1051]: time="2024-02-08T23:43:09.653956889Z" level=warning msg="cleaning up after shim disconnected" id=40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a namespace=k8s.io Feb 8 23:43:09.654028 env[1051]: time="2024-02-08T23:43:09.654009427Z" level=info msg="cleaning up dead shim" Feb 8 23:43:09.671711 env[1051]: time="2024-02-08T23:43:09.671626127Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3358 runtime=io.containerd.runc.v2\n" Feb 8 23:43:09.792659 env[1051]: time="2024-02-08T23:43:09.792562244Z" level=info msg="StopPodSandbox for \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\"" Feb 8 23:43:09.792947 env[1051]: time="2024-02-08T23:43:09.792771087Z" level=info msg="TearDown network for sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" successfully" Feb 8 23:43:09.792947 env[1051]: time="2024-02-08T23:43:09.792894118Z" level=info msg="StopPodSandbox for \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" returns successfully" Feb 8 23:43:09.796666 kubelet[1341]: I0208 23:43:09.796575 1341 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b18a0295-aa77-43fd-a1b8-83dd50fa0417 
path="/var/lib/kubelet/pods/b18a0295-aa77-43fd-a1b8-83dd50fa0417/volumes" Feb 8 23:43:10.227424 kubelet[1341]: I0208 23:43:10.227360 1341 setters.go:548] "Node became not ready" node="172.24.4.229" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:43:10.227261156 +0000 UTC m=+105.833841426 LastTransitionTime:2024-02-08 23:43:10.227261156 +0000 UTC m=+105.833841426 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:43:10.232164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a-rootfs.mount: Deactivated successfully. Feb 8 23:43:10.401438 env[1051]: time="2024-02-08T23:43:10.401306657Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:43:10.430395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3111373024.mount: Deactivated successfully. Feb 8 23:43:10.446108 env[1051]: time="2024-02-08T23:43:10.446032162Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b\"" Feb 8 23:43:10.446883 env[1051]: time="2024-02-08T23:43:10.446830402Z" level=info msg="StartContainer for \"e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b\"" Feb 8 23:43:10.483202 systemd[1]: Started cri-containerd-e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b.scope. 
Feb 8 23:43:10.527792 env[1051]: time="2024-02-08T23:43:10.527757222Z" level=info msg="StartContainer for \"e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b\" returns successfully"
Feb 8 23:43:10.537126 systemd[1]: cri-containerd-e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b.scope: Deactivated successfully.
Feb 8 23:43:10.564915 env[1051]: time="2024-02-08T23:43:10.564761415Z" level=info msg="shim disconnected" id=e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b
Feb 8 23:43:10.565157 env[1051]: time="2024-02-08T23:43:10.565138965Z" level=warning msg="cleaning up after shim disconnected" id=e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b namespace=k8s.io
Feb 8 23:43:10.565223 env[1051]: time="2024-02-08T23:43:10.565209558Z" level=info msg="cleaning up dead shim"
Feb 8 23:43:10.573636 env[1051]: time="2024-02-08T23:43:10.573604275Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3416 runtime=io.containerd.runc.v2\n"
Feb 8 23:43:10.646936 kubelet[1341]: E0208 23:43:10.646808 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:10.714715 kubelet[1341]: W0208 23:43:10.714637 1341 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb18a0295_aa77_43fd_a1b8_83dd50fa0417.slice/cri-containerd-58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea.scope WatchSource:0}: container "58fc6c0ad2bc63c11737c8f25e96c7319e101f9dd6754800e2f996bfeb2811ea" in namespace "k8s.io": not found
Feb 8 23:43:10.763914 kubelet[1341]: E0208 23:43:10.763772 1341 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 8 23:43:11.231787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b-rootfs.mount: Deactivated successfully.
Feb 8 23:43:11.412711 env[1051]: time="2024-02-08T23:43:11.412533253Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 8 23:43:11.443560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933644917.mount: Deactivated successfully.
Feb 8 23:43:11.462215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423790138.mount: Deactivated successfully.
Feb 8 23:43:11.473227 env[1051]: time="2024-02-08T23:43:11.473077476Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1\""
Feb 8 23:43:11.476243 env[1051]: time="2024-02-08T23:43:11.476177428Z" level=info msg="StartContainer for \"f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1\""
Feb 8 23:43:11.511892 systemd[1]: Started cri-containerd-f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1.scope.
Feb 8 23:43:11.554157 systemd[1]: cri-containerd-f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1.scope: Deactivated successfully.
Feb 8 23:43:11.557767 env[1051]: time="2024-02-08T23:43:11.557564590Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88af8420_a5b8_4466_b465_a406924d833c.slice/cri-containerd-f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1.scope/memory.events\": no such file or directory"
Feb 8 23:43:11.561879 env[1051]: time="2024-02-08T23:43:11.561823378Z" level=info msg="StartContainer for \"f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1\" returns successfully"
Feb 8 23:43:11.585890 env[1051]: time="2024-02-08T23:43:11.585830368Z" level=info msg="shim disconnected" id=f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1
Feb 8 23:43:11.586140 env[1051]: time="2024-02-08T23:43:11.586120092Z" level=warning msg="cleaning up after shim disconnected" id=f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1 namespace=k8s.io
Feb 8 23:43:11.586237 env[1051]: time="2024-02-08T23:43:11.586220741Z" level=info msg="cleaning up dead shim"
Feb 8 23:43:11.595108 env[1051]: time="2024-02-08T23:43:11.595051828Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3474 runtime=io.containerd.runc.v2\n"
Feb 8 23:43:11.647657 kubelet[1341]: E0208 23:43:11.647578 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:12.420862 env[1051]: time="2024-02-08T23:43:12.420272383Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 8 23:43:12.496071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032787071.mount: Deactivated successfully.
Feb 8 23:43:12.511430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1511965103.mount: Deactivated successfully.
Feb 8 23:43:12.527731 env[1051]: time="2024-02-08T23:43:12.527613364Z" level=info msg="CreateContainer within sandbox \"eda3e4dff05eb0f67ef1ca741e18ba558d48de03314713dde24a6974b80b1636\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9\""
Feb 8 23:43:12.529178 env[1051]: time="2024-02-08T23:43:12.529124893Z" level=info msg="StartContainer for \"70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9\""
Feb 8 23:43:12.566960 systemd[1]: Started cri-containerd-70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9.scope.
Feb 8 23:43:12.622562 env[1051]: time="2024-02-08T23:43:12.622014110Z" level=info msg="StartContainer for \"70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9\" returns successfully"
Feb 8 23:43:12.648338 kubelet[1341]: E0208 23:43:12.648210 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:13.465016 kubelet[1341]: I0208 23:43:13.464921 1341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fdm7f" podStartSLOduration=5.464846329 pod.CreationTimestamp="2024-02-08 23:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:43:13.464782379 +0000 UTC m=+109.071362609" watchObservedRunningTime="2024-02-08 23:43:13.464846329 +0000 UTC m=+109.071426569"
Feb 8 23:43:13.499883 kernel: cryptd: max_cpu_qlen set to 1000
Feb 8 23:43:13.545855 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Feb 8 23:43:13.649074 kubelet[1341]: E0208 23:43:13.648977 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:13.842062 kubelet[1341]: W0208 23:43:13.841932 1341 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88af8420_a5b8_4466_b465_a406924d833c.slice/cri-containerd-8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70.scope WatchSource:0}: task 8f8c29923f13e0b8bd0f1f387ca11166d39f259cef326684142432d24412eb70 not found: not found
Feb 8 23:43:14.650063 kubelet[1341]: E0208 23:43:14.649903 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:14.724057 systemd[1]: run-containerd-runc-k8s.io-70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9-runc.ID97mU.mount: Deactivated successfully.
Feb 8 23:43:14.819616 kubelet[1341]: E0208 23:43:14.819539 1341 upgradeaware.go:440] Error proxying data from backend to client: write tcp 172.24.4.229:10250->172.24.4.14:52398: write: connection reset by peer
Feb 8 23:43:15.652183 kubelet[1341]: E0208 23:43:15.652096 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:16.653593 kubelet[1341]: E0208 23:43:16.653566 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:16.784275 systemd-networkd[975]: lxc_health: Link UP
Feb 8 23:43:16.795354 systemd-networkd[975]: lxc_health: Gained carrier
Feb 8 23:43:16.795841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:43:16.937783 systemd[1]: run-containerd-runc-k8s.io-70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9-runc.9nTUcY.mount: Deactivated successfully.
Feb 8 23:43:16.954788 kubelet[1341]: W0208 23:43:16.954259 1341 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88af8420_a5b8_4466_b465_a406924d833c.slice/cri-containerd-40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a.scope WatchSource:0}: task 40fb58601ccd35d0c51c65f597eeff43c903a767e5c5331badb31b2d751d705a not found: not found
Feb 8 23:43:17.654395 kubelet[1341]: E0208 23:43:17.654355 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:18.498095 systemd-networkd[975]: lxc_health: Gained IPv6LL
Feb 8 23:43:18.655246 kubelet[1341]: E0208 23:43:18.655125 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:19.184287 systemd[1]: run-containerd-runc-k8s.io-70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9-runc.CJcbHR.mount: Deactivated successfully.
Feb 8 23:43:19.656322 kubelet[1341]: E0208 23:43:19.656256 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:20.078763 kubelet[1341]: W0208 23:43:20.078708 1341 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88af8420_a5b8_4466_b465_a406924d833c.slice/cri-containerd-e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b.scope WatchSource:0}: task e76a87a235e053d1f7e90294ca32ffac03b10d08502b0edf6053e58a4ad07a4b not found: not found
Feb 8 23:43:20.657000 kubelet[1341]: E0208 23:43:20.656969 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:21.407651 systemd[1]: run-containerd-runc-k8s.io-70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9-runc.M7OLxt.mount: Deactivated successfully.
Feb 8 23:43:21.658206 kubelet[1341]: E0208 23:43:21.658012 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:22.659559 kubelet[1341]: E0208 23:43:22.659458 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:23.188182 kubelet[1341]: W0208 23:43:23.188121 1341 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88af8420_a5b8_4466_b465_a406924d833c.slice/cri-containerd-f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1.scope WatchSource:0}: task f0d61807f40ef1354cc1c97cf711838abefb6e9aa97914f59e66f405aa784fe1 not found: not found
Feb 8 23:43:23.650904 systemd[1]: run-containerd-runc-k8s.io-70835af0f7db766a42d76ccebffd48e0b48683ba7ebdef7fbbdb82ff2621f3e9-runc.TBMEHh.mount: Deactivated successfully.
Feb 8 23:43:23.660621 kubelet[1341]: E0208 23:43:23.660506 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:24.660952 kubelet[1341]: E0208 23:43:24.660855 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:25.543582 kubelet[1341]: E0208 23:43:25.543512 1341 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:25.568454 env[1051]: time="2024-02-08T23:43:25.568363813Z" level=info msg="StopPodSandbox for \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\""
Feb 8 23:43:25.569468 env[1051]: time="2024-02-08T23:43:25.569343733Z" level=info msg="TearDown network for sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" successfully"
Feb 8 23:43:25.569674 env[1051]: time="2024-02-08T23:43:25.569629810Z" level=info msg="StopPodSandbox for \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" returns successfully"
Feb 8 23:43:25.570621 env[1051]: time="2024-02-08T23:43:25.570519741Z" level=info msg="RemovePodSandbox for \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\""
Feb 8 23:43:25.570758 env[1051]: time="2024-02-08T23:43:25.570594200Z" level=info msg="Forcibly stopping sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\""
Feb 8 23:43:25.570890 env[1051]: time="2024-02-08T23:43:25.570747207Z" level=info msg="TearDown network for sandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" successfully"
Feb 8 23:43:25.580039 env[1051]: time="2024-02-08T23:43:25.579932313Z" level=info msg="RemovePodSandbox \"3a07888993018bd9bdcc48b0477d63c32c607f1c5fa0eef54cd71525515ae41e\" returns successfully"
Feb 8 23:43:25.581031 env[1051]: time="2024-02-08T23:43:25.580980080Z" level=info msg="StopPodSandbox for \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\""
Feb 8 23:43:25.581668 env[1051]: time="2024-02-08T23:43:25.581543848Z" level=info msg="TearDown network for sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" successfully"
Feb 8 23:43:25.581902 env[1051]: time="2024-02-08T23:43:25.581855102Z" level=info msg="StopPodSandbox for \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" returns successfully"
Feb 8 23:43:25.582566 env[1051]: time="2024-02-08T23:43:25.582518488Z" level=info msg="RemovePodSandbox for \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\""
Feb 8 23:43:25.582797 env[1051]: time="2024-02-08T23:43:25.582723453Z" level=info msg="Forcibly stopping sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\""
Feb 8 23:43:25.583101 env[1051]: time="2024-02-08T23:43:25.583055156Z" level=info msg="TearDown network for sandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" successfully"
Feb 8 23:43:25.588409 env[1051]: time="2024-02-08T23:43:25.588353723Z" level=info msg="RemovePodSandbox \"0a6519b20d4746d5ce0ef7df91eb69f531ed07f5abe0a517df8260e868f4631f\" returns successfully"
Feb 8 23:43:25.661954 kubelet[1341]: E0208 23:43:25.661807 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:26.662660 kubelet[1341]: E0208 23:43:26.662601 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:43:27.663995 kubelet[1341]: E0208 23:43:27.663942 1341 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"